Our approach for building scalable volumetric models of neuronal circuits from the experimentally reconstructed morphological skeletons is illustrated by Fig. 1 and summarized in the following points:

1.
Preprocessing the individual neuronal morphologies that compose the circuit to repair any artifacts that would impact the meshing process.

2.
Creating smooth and watertight mesh models of the neurons from their morphologies.

3.
Building local volumetric models of the neurons from their mesh models.

4.
Integrating all the local volumes of the individual neurons into a single global volume dataset.

5.
Annotating, or ‘tagging’, the global volumetric model of the circuit according to the criteria specified by the in silico study. For example, in clarified fluorescence experiments [31], the neurons will be tagged with the spectral characteristics of the different fluorescent dyes that are injected intracellularly. In optogenetic experiments, the volume will be tagged with the intrinsic optical properties of the cortical tissue [32] to account for precise light attenuation and accurate neuronal stimulation [33].
Repairing morphological artifacts
The neuronal morphologies are reconstructed from imaging stacks obtained from different microscopes. These morphologies can be digitized either with semi-automated [34] or fully automated [35] tracing methods [25, 36]. The digitized data can be stored in multiple file formats such as SWC and the proprietary Neurolucida formats [37, 38]. For convenience, the digitized data are loaded, converted and stored as a tree data structure. The skeletal tree of a neuron is defined by the following components: a cell body (or soma), sample points, segments, sections, and branches. The soma, which is the root of the tree, is usually described by a point, a radius and a two-dimensional contour of its projection onto a plane, or a three-dimensional one extracted from a series of parallel cross sections. Each sample represents a point in the morphology with a certain position and the radius of the corresponding cross section at this point. Two consecutive samples define a connected segment, a section is a series of non-bifurcating segments, and a branch is a linear concatenation of sections. Figure 1a illustrates these concepts.
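The tree data structure described above can be sketched with a few Python classes; the names and fields here are illustrative, not the ones used in the actual implementation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Sample:
    """A morphological sample: a 3D position and the cross-section radius."""
    x: float
    y: float
    z: float
    radius: float

@dataclass
class Section:
    """A series of non-bifurcating segments; two consecutive samples
    define one connected segment."""
    samples: List[Sample]
    children: List["Section"] = field(default_factory=list)

    def segments(self):
        # Pair up consecutive samples to enumerate the segments.
        return list(zip(self.samples, self.samples[1:]))

@dataclass
class Soma:
    """Root of the skeletal tree: centroid, mean radius and, when
    available, a 2D profile of its projection onto a plane."""
    centroid: tuple
    mean_radius: float
    profile_points: List[tuple] = field(default_factory=list)
```

A branch is then simply a list of `Section` objects concatenated end to end.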
Due to certain reconstruction errors, morphologies can have acute artifacts that limit their usability for meshing. In this step, each morphological skeleton is investigated and repaired if it contains any of the following artifacts:

1.
Branches disconnected from the soma (relatively distant), where the first sample of a first-order section is located far away from the soma.

2.
Overlap between the connections of first-order sections at the soma.

3.
Branches intersecting the soma, where multiple samples of the branch are located inside the soma extent.
These issues can severely deform the reconstructed three-dimensional profile of the soma, affect the smoothness of the first-order branches of the mesh and potentially break the continuity of the volumetric model of the neuron. The disconnected branches were fixed by repositioning the far-away samples closer to the soma. The new locations of these samples were set based on the most distant sample given by the two-dimensional profile of the soma. For example, if the first-order sample is located at 20 micrometers from the center of the soma, while the farthest profile point is located at 10 micrometers, the sample is repositioned to lie within 10 micrometers of the center along the same direction as the original sample.
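The repositioning rule in the example above can be sketched as follows (a hypothetical helper, not the published code):

```python
import math

def reposition_disconnected_sample(sample, soma_center, max_profile_distance):
    """Pull a first-order sample that lies beyond the farthest soma profile
    point back to that distance, preserving its direction from the centroid.
    Positions are (x, y, z) tuples."""
    direction = [s - c for s, c in zip(sample, soma_center)]
    distance = math.sqrt(sum(d * d for d in direction))
    if distance <= max_profile_distance:
        return sample  # already close enough; leave untouched
    # Rescale the offset so the sample sits at the profile distance.
    scale = max_profile_distance / distance
    return tuple(c + d * scale for c, d in zip(soma_center, direction))
```

With the paper's numbers, a sample 20 micrometers from the centroid and a farthest profile point at 10 micrometers yields a sample moved to 10 micrometers along the same direction.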
The algorithm for creating a mesh for the soma is based on deforming an initial mesh into a physically plausible shape. Two branches influencing the same vertices of the initial mesh give rise to severe artifacts. Therefore, if two or more first-order branches overlap, the branch with the largest diameter is marked as a primary branch, while the others are ignored for this process. Finally, the samples that belong to first-order branches and are contained within the soma extent are removed entirely from the skeleton.
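The two repair rules above might be sketched like this, with branches simplified to lists of (x, y, z, radius) samples (hypothetical helpers, not the published implementation):

```python
def select_primary_branch(overlapping_branches):
    """Among overlapping first-order branches, mark the one with the largest
    starting diameter as primary; the others are ignored for soma meshing."""
    return max(overlapping_branches, key=lambda branch: branch[0][3])

def remove_samples_inside_soma(branch, soma_center, soma_radius):
    """Drop the samples of a first-order branch that fall within the
    soma extent, modeled here as a sphere around the centroid."""
    def outside(sample):
        dx, dy, dz = (sample[i] - soma_center[i] for i in range(3))
        return (dx * dx + dy * dy + dz * dz) ** 0.5 > soma_radius
    return [s for s in branch if outside(s)]
```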
Meshing: from morphological samples to polygons
In general, creating an accurate volumetric representation of a surface object requires a polygonal mesh model with certain geometric properties: the mesh has to be watertight, i.e. non-self-intersecting and two-manifold [39]. Unfortunately, creating a single smooth, continuous and watertight polygonal mesh representation of the cell surface from a morphological skeleton is more difficult than it seems. Reconstructing a mesh model that approximates the soma surface is relatively simple; the main issues arise when (1) connecting first-order branches to the soma and (2) joining the branches to each other. Apart from these intrinsic difficulties, morphological reconstructions from wet lab experiments are not traced with membrane meshing in mind. Therefore, they may contain features and artifacts that can badly influence the branching process even if the artifacts are completely repaired. In certain cases, some branches can have extremely short sections relative to their diameters, or unexpected trifurcations that can distort the final mesh.
The existing approaches for building geometric representations of a neuron are not capable of creating a smooth, continuous and watertight surface of the cell membrane integrated into a single mesh object. In neuroConstruct, the neuron is modeled with discrete cylinders, each of which represents a single morphological segment [26]. By definition, the cylinders are watertight surfaces; however, this technique underestimates the actual geometric shape of the branches and introduces gaps or discontinuities between segments that are not collinear. In contrast, the method presented by Lasserre et al. can be used to create high-fidelity and continuous polygonal meshes of the neurons, but the objects resulting from the meshing process are not guaranteed to be watertight. Their algorithm resamples the entire morphological skeleton uniformly, and thus the resampling step cannot handle bifurcations that are closer than the radii of the branching sections. Moreover, the somata are not reconstructed on a physically plausible basis to reflect their actual shapes. This issue has been resolved by the method discussed by Brito et al. [40]. They can also build watertight meshes for the branches, but their approach is valid only if the morphological skeleton is artifact-free. The watertightness of the resulting meshes is not guaranteed if the lengths of the sections are small relative to their radii or when two first-order branches overlap.
We present a novel approach to address the previous limitations and build highly realistic and smooth polygonal mesh models that are watertight ‘piecewise’. The resulting mesh consists of multiple ‘separate’ and ‘overlapping’ objects, where each individual object is continuous and watertight. In terms of voxelization, this piecewise watertight mesh is perfectly equivalent to a single connected watertight mesh, which is almost impossible to obtain in practice. The overlap between the different objects guarantees the continuity of the volumetric model of the neuron, Fig. 1g and 1i. The final result of the voxelization will be correct as long as the union of all the pieces provides a faithful representation of each component of the neuron. The mesh is split into three components: (1) a single object for the soma, (2) multiple objects for the neurites (or arbors) and (3) optionally, multiple objects for the spines if that information is available.
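The equivalence between voxelizing a piecewise mesh and voxelizing a single merged mesh can be illustrated at the voxel level with a toy example: the union of separately voxelized overlapping pieces equals the voxelization of the merged solid, as long as the pieces jointly cover it.

```python
def voxelize_box(lo, hi):
    """Voxel set of an axis-aligned box given integer corner coordinates."""
    return {(x, y, z)
            for x in range(lo[0], hi[0])
            for y in range(lo[1], hi[1])
            for z in range(lo[2], hi[2])}

# Two overlapping pieces that jointly cover one solid bar.
piece_a = voxelize_box((0, 0, 0), (4, 2, 2))
piece_b = voxelize_box((2, 0, 0), (6, 2, 2))  # overlaps piece_a in x = 2..3

# Voxelizing the pieces separately and taking the union gives the same
# volume as voxelizing the merged solid in one pass.
merged = voxelize_box((0, 0, 0), (6, 2, 2))
assert piece_a | piece_b == merged
```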
Soma meshing In advanced morphological reconstructions, the soma is precisely described by a three-dimensional profile that is obtained at multiple depths of field [41]. In this case, the soma mesh object can be accurately created relying on the Poisson surface reconstruction algorithm, which converts sufficiently dense point clouds to triangular meshes [42]. However, the majority of the existing morphologies represent the soma by a centroid, a mean radius and, in some cases, a two-dimensional profile, and thus building a realistic soma object is relatively challenging [36].
Lasserre et al. presented a kernel-based approach for recovering the shape of the soma from a spherical polygonal kernel with 36 faces [14]. The first-order branches of the neurons are connected to their closest free kernel face, and then the kernel is scaled up until the faces reach their respective branches. The resulting somata are a better approximation than a sphere, but they cannot reflect the actual somatic shapes. Brito et al. have discussed a more plausible approach for reconstructing the shape of the soma based on a mass-spring system and Hooke’s law [40, 43, 44]. Their method simulates the growth of the soma by pulling forces that emanate from the first-order sections. However, their implementation has not been open-sourced for reuse.
We present a similar algorithm for reconstructing a realistic three-dimensional contour of the soma, implemented with the physics library of Blender [30, 45]. The algorithm simulates the growth of the soma by deforming the surface of a soft body sphere based on a mass-spring model. The soma is initially modeled by an isotropic simplicial polyhedron that approximates a sphere, called an icosphere [46]. The icosphere is advantageous over a UV-mapped sphere because (1) its vertices are evenly distributed and (2) the geodesic polyhedron structure distributes the internal forces throughout the entire structure. As a trade-off between compute time and quality, the subdivision level of the icosphere is set to four. The radius of the icosphere is computed with respect to the minimal distance between the soma centroid and the initial points of all the first-order branches.
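The radius computation can be sketched as below; the `shrink` factor is an assumption for illustration, since the text only states that the radius is derived from the minimal distance to the first-order branch origins:

```python
import math

def initial_icosphere_radius(soma_centroid, first_order_starts, shrink=0.5):
    """Radius of the initial icosphere, computed with respect to the minimal
    distance between the soma centroid and the initial samples of the
    first-order branches. The shrink factor is hypothetical: the initial
    sphere only needs to start inside that minimal distance."""
    return shrink * min(math.dist(soma_centroid, p)
                        for p in first_order_starts)
```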
Each vertex of the initial icosphere is a control point and each edge represents a spring. For each first-order section, the initial cross-section is spherically projected onto the icosphere and the vertices within this projection are selected to create a hook modifier, an ensemble of control points that remains rigid during the simulation. Before the hook is created, all the faces formed by the selected vertices are merged into a single face that is reshaped into a circle with the same radius as the projected radius of the cross-section. During the simulation, each hook is moved towards its corresponding target section, causing a pulling force. At the same time, the connecting polygons are progressively scaled to match the size of the final cross-section at the destination point. This simulation is illustrated in Fig. 2. If two or more first-order sections or their projections overlap, only the section with the largest diameter is considered. The others will be extended later to the soma centroid during neurite generation.
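The spherical projection above reduces to rescaling the cross-section's distance from the centroid to the sphere radius; the similar-triangles scaling of the projected radius is an assumption for illustration, and hook creation itself relies on Blender's modifier API rather than code like this:

```python
import math

def project_to_icosphere(point, centroid, sphere_radius):
    """Spherically project the starting point of a first-order section onto
    the icosphere: keep its direction from the centroid, rescale its
    distance to the sphere radius."""
    direction = [p - c for p, c in zip(point, centroid)]
    length = math.sqrt(sum(d * d for d in direction))
    return tuple(c + d * sphere_radius / length
                 for c, d in zip(centroid, direction))

def projected_radius(section_radius, point_distance, sphere_radius):
    """Radius of the projected cross-section, scaled by similar triangles
    (hypothetical scaling rule)."""
    return section_radius * sphere_radius / point_distance
```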
Neurite meshing To mesh a neurite, we first divide the morphology into a set of branches (concatenated non-bifurcating sections) that span the entire morphological tree, Fig. 1e. The algorithm starts the first branch from the first-order section of the neurite. At the first bifurcation, the section with the largest cross-section at the starting sample is chosen as the continuation of the ongoing branch, and the rest are pushed onto a stack. The algorithm proceeds to the next bifurcation and repeats until a terminal section is reached. Once the branch is completed, the first section in the stack is popped and a new branch is created from there. The algorithm finishes when all sections have been processed.
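The branch decomposition can be written as a short stack-based traversal; the callback-style interface here is an illustrative choice, not the published implementation:

```python
def decompose_into_branches(root_section, get_children, start_radius):
    """Split a neurite tree into branches (concatenations of non-bifurcating
    sections): follow the thickest child at each bifurcation and stack the
    siblings. `get_children(s)` returns a section's child sections and
    `start_radius(s)` its radius at the starting sample."""
    branches = []
    stack = [root_section]
    while stack:
        section = stack.pop()
        branch = []
        while section is not None:
            branch.append(section)
            children = sorted(get_children(section), key=start_radius,
                              reverse=True)
            # The thickest child continues the branch; the siblings each
            # start a new branch later.
            section = children[0] if children else None
            stack.extend(children[1:])
        branches.append(branch)
    return branches
```

Each section is pushed and popped exactly once, which is why the decomposition runs in time linear in the number of sections.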
Each branch is meshed separately using a polyline with a circular bevel that is adjusted to the branch radius at each control point, Fig. 1f. The initial branch of each neurite is connected to the centroid of the soma with a conic section. For most branches this connection will not be visible, but it is necessary for those that were overlapping a thicker branch and did not participate in the soma generation. The whole algorithm requires only local information at each step, so it runs very quickly and in linear time with respect to the number of sections.
Voxelization: from polygonal to volumetric models
A straightforward approach to voxelize an entire neuronal circuit of a few hundred or thousand neurons is to create a polygonal mesh for each neuron in the circuit, merge all of them into a single mesh and feed that mesh into an existing robust solid voxelizer. However, this approach is infeasible due to the memory required to create the single aggregate mesh model of all neurons. We propose a novel and efficient CPU-based method for creating those volumetric models without the need to build joint models of neurons. We use a CPU implementation so that the maximum volume data size is not restricted by the memory of an acceleration device, e.g. a GPU [47–49]. To reduce the memory requirements of our algorithm, we use binary voxelization to store the volume (1 bit per voxel).
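The 1-bit-per-voxel storage can be sketched with a flat bit array; the x-major index order is an arbitrary choice for illustration:

```python
class BinaryVolume:
    """Binary voxel grid storing 1 bit per voxel in a flat bytearray,
    an eightfold saving over one byte per voxel."""

    def __init__(self, nx, ny, nz):
        self.nx, self.ny, self.nz = nx, ny, nz
        # Round the bit count up to whole bytes.
        self.bits = bytearray((nx * ny * nz + 7) // 8)

    def _index(self, x, y, z):
        # Flat x-major linear index (arbitrary layout choice).
        return (z * self.ny + y) * self.nx + x

    def set(self, x, y, z):
        i = self._index(x, y, z)
        self.bits[i >> 3] |= 1 << (i & 7)

    def get(self, x, y, z):
        i = self._index(x, y, z)
        return bool(self.bits[i >> 3] & (1 << (i & 7)))
```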
The volume is created in four steps: (1) computing the dimensions of the volume, (2) parallel surface voxelization of the piecewise meshes of all the neurons in the circuit, (3) parallel, slice-by-slice solid voxelization of the entire volume, and finally (4) annotating the volume.
The spatial extent of the circuit is obtained by transforming the piecewise mesh of each neuron to global coordinates, computing its axis-aligned bounding box, and finally calculating the union bounding box of all the meshes. The size of the volume is defined according to the circuit extent and the desired resolution. The volumetric shell of each component of the mesh is obtained with surface voxelization, Fig. 1g. This process rasterizes all the pieces composing a mesh to find their faces intersecting the volume. This step is easily parallelizable, as each cell can be processed independently; we only need to ensure that the set operations on the volume dataset are thread-safe.
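The extent computation reduces to a union of axis-aligned bounding boxes followed by a division by the voxel size; a minimal sketch, where rounding the grid dimensions up is an assumption:

```python
def union_bounding_box(boxes):
    """Union of axis-aligned bounding boxes, each given as
    ((min_x, min_y, min_z), (max_x, max_y, max_z))."""
    mins = tuple(min(b[0][axis] for b in boxes) for axis in range(3))
    maxs = tuple(max(b[1][axis] for b in boxes) for axis in range(3))
    return mins, maxs

def volume_dimensions(extent, voxel_size):
    """Grid dimensions for a given extent and voxel edge length,
    rounding each axis up to cover the full extent."""
    min_corner, max_corner = extent
    return tuple(int(-(-(hi - lo) // voxel_size))  # ceiling division
                 for lo, hi in zip(min_corner, max_corner))
```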
Afterwards, the extracellular space is tagged by flood-filling the volume resulting from surface voxelization [50]. To parallelize this process, we have used a two-dimensional flood-filling algorithm that can be applied to each slice of the volume, Fig. 1h. The final volume is created by inverting the flood-filled one, discarding the extracellular voxels, Fig. 1i.
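The per-slice scheme can be sketched as follows: flood-fill the empty voxels of each slice from its border, then invert the exterior mask so that shell and interior voxels become solid. A toy sketch with nested lists, not the parallel implementation:

```python
from collections import deque

def solid_from_shell(shell):
    """Slice-by-slice solid voxelization of a 3D nested list of 0/1 values:
    every voxel not reachable from a slice border through empty voxels
    (i.e. shell or interior) becomes solid."""
    solid = []
    for slice_ in shell:
        rows, cols = len(slice_), len(slice_[0])
        outside = [[False] * cols for _ in range(rows)]
        # Seed the flood fill with the empty border voxels of the slice.
        queue = deque((y, x) for y in range(rows) for x in range(cols)
                      if (y in (0, rows - 1) or x in (0, cols - 1))
                      and not slice_[y][x])
        for y, x in queue:
            outside[y][x] = True
        while queue:
            y, x = queue.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                yy, xx = y + dy, x + dx
                if 0 <= yy < rows and 0 <= xx < cols \
                        and not outside[yy][xx] and not slice_[yy][xx]:
                    outside[yy][xx] = True
                    queue.append((yy, xx))
        # Inverting the exterior mask yields the solid slice.
        solid.append([[0 if outside[y][x] else 1 for x in range(cols)]
                      for y in range(rows)])
    return solid
```

Because every slice is processed independently, the loop over slices parallelizes trivially.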