Mesh Connectivity Compression Using Convection Reconstruction
Raphaëlle Chaine¹, Pierre-Marie Gandoin² and Céline Roudet¹

¹ LIRIS - UCB Lyon 1 - Bâtiment NAUTIBUS - 8 bd Niels Bohr - 69622 Villeurbanne Cedex - France
² LIRIS - Université Lumière Lyon 2 - 5 av Pierre Mendès-France - 69676 Bron Cedex - France
FirstName.LastName@liris.cnrs.fr
Abstract
During a highly productive period running from 1995 to about 2002, research in lossless compression of 3D meshes mainly consisted in a hard battle for the best bitrates. For a few years, however, compression rates have seemed to stabilize around 1.5 bits per vertex for the connectivity coding of usual meshes, and more and more work is dedicated to remeshing, lossy compression, or gigantic mesh compression, where memory and CPU optimizations are the new priority. However, the size of 3D models keeps growing, and many application fields keep requiring lossless compression. In this paper, we present a new contribution to single-rate lossless connectivity compression, which first improves on current state-of-the-art bitrates, and secondly does not constrain the coding of the vertex positions, therefore offering good complementarity with the best performing geometric compression methods. The initial observation that motivated this work is that, very often, most of the connectivity of a mesh can be automatically deduced from its geometric part using reconstruction algorithms. This has already been exploited within the limited framework of projectable objects (essentially terrain models and GIS), but finds here its first generalization to arbitrary triangular meshes, without any limitation regarding the topological genus, the number of connected components, the manifoldness or the regularity. This is obtained by constraining and guiding a Delaunay-based reconstruction algorithm so that it outputs the initial mesh to be coded. The resulting rates are extremely competitive when the meshes are fully included in the Delaunay triangulation, and remain good compared to the state of the art in the general case.
CR Categories:
I.3.5 [Computing Methodologies]: Computer Graphics—Computational Geometry and Object Modeling
Keywords:
Mesh, Compression, Reconstruction, Lossless, Connectivity
1 Introduction
For several years, meshes have played a leading role in computer graphics, and their ability to model the world places them at the heart of advanced applications in all fields of science, arts and leisure. While some of these applications can tolerate a limited loss of information (provided that this loss is well controlled and does not damage a certain visual realism), others, for practical or even legal reasons, require working continuously with exact copies of the original objects. In this case, the only way to optimize the storage and transmission of 3D information is to resort to lossless compression methods.

A mesh is defined by a set of points (the geometry of the mesh) and by a combinatorial structure describing the relations between these points, using geometric objects of higher dimensions: edges, facets, polyhedra (the connectivity, or topology, of the mesh). To this fundamental information are sometimes added attributes that improve the rendering: normals, colors or textures. If we put aside these attributes (they concern only a subset of the meshes met in practice), all the difficulty in compressing 3D objects is to process with equal efficiency the geometry and the connectivity of a wide class of meshes. However, most of the methods proposed so far stress one of these two aspects (usually the connectivity), generally to the detriment (or at least, not to the advantage) of the other one, which is constrained by a description order rarely optimal in terms of compression. In this article, we propose a new single-rate connectivity coder which is not only general and efficient, but also does not impose any constraint on the coding of the geometry. In particular, it is fully compatible with the position coders that currently offer the best compression rates.

We took it as given that, very often, the main part of a mesh connectivity can be deduced from its vertex set using reconstruction methods. This remark has already been made in the past, but the problem was to find an effective way of describing the difference between the initial mesh and its reconstructed version. Indeed, within the framework of lossless compression, it is indispensable to be able to correct these errors in order to obtain a decoded mesh perfectly identical to the original one. Rather than describe the difference to the initial object at the end of the reconstruction phase, we suggest adapting an existing algorithm so that it can accept occasional codes modifying its default behavior at the moments when it would commit an "error" of reconstruction with regard to the initial mesh.

Having placed our work in the historical context of 3D compression and 3D reconstruction (Section 2), we expose the general principle of the method (Section 3), first within a restricted framework, then generalized to arbitrary triangular meshes (regarding the genus, the number of connected components, the regularity and the manifoldness). We then detail the coding techniques and optimization steps leading to better compression rates (Section 4), before presenting comparative results and comments on the method's performance (Section 5).
2 Context and Previous Works
2.1 Mesh Compression
As mentioned above, a mesh is composed of both a geometric part (the vertex positions) and a topological part (the vertex connectivity). Looking at the whole scientific production in mesh compression from 1995 until now, we notice clearly that connectivity coding has motivated most of the proposed methods. The usual scheme consists in describing the combinatorial structure by enumerating its vertices in a precise order, designed to minimize the size of the code: each vertex is coupled with a variable-size symbol defining the way it is connected to the rest of the mesh. Consequently, the geometric coder has to work with this predefined order of the vertices, but it can exploit the connectivity to predict their location. The position of the currently transmitted vertex is estimated from its neighbors, and the prediction error is the sole information to be coded. But the order imposed by the connectivity coding is not necessarily favorable to this prediction, as shown by the results obtained on usual meshes by classical algorithms: the best of them reduce the connectivity information to less than 1 or 2 bits per vertex, while for the geometric part, the rates rarely go below 90% of the initial size (except for very low quantizations). Among the works that follow this principle, we focus here on single-rate methods, which code and decode the mesh in one pass and do not allow progressive visualization [Deering 1995; Evans et al. 1996; Taubin and Rossignac 1998; Gumhold and Strasser 1998; Touma and Gotsman 1998; Li and Kuo 1998; Bajaj et al. 1999a; Bajaj et al. 1999b; Gumhold et al. 1999; Rossignac 1999; Rossignac and Szymczak 1999; King and Rossignac 1999; Isenburg and Snoeyink 1999; Isenburg 2000; Alliez and Desbrun 2001; Lee et al. 2002; Coors and Rossignac 2004; Kaelberer et al. 2005; Jong et al. 2005]. To understand how these methods compare, the reader can also refer to the following surveys [Alliez and Gotsman 2004; Gotsman et al. 2002; Peng et al. 2005]. It is worth mentioning here that the relative stability in compression rates observed these days can be explained by the new interest of the community in gigantic meshes: in this framework, the challenge is to improve the efficiency of the compression in terms of CPU and memory requirements rather than in terms of bitrates [Isenburg and Gumhold 2003; Isenburg and Lindstrom 2005; Isenburg et al. 2005].
2.2 Prioritizing the Geometric Coding
After this first wave of works, knowing that the geometry of a mesh generally weighs much more than its connectivity, researchers began to propose methods giving priority to geometric compression. Some of them leave the lossless framework by proposing algorithms that often impose a complete remeshing of the object before applying spectral decomposition tools like wavelets, subdivision schemes, or classic geometric compression methods [Gross et al. 1996; Certain et al. 1996; Lounsbery et al. 1997; Staadt et al. 1998; Guskov et al. 2000; Khodakovsky et al. 2000; Khodakovsky and Guskov 2000; Karni and Gotsman 2000; Szymczak et al. 2002; Attene et al. 2003].

On the other hand, the works introduced by Gandoin and Devillers and improved by Peng and Kuo [Devillers and Gandoin 2000; Gandoin and Devillers 2002; Peng and Kuo 2005] describe a progressive and lossless compression method, centered on the geometry, which interests us particularly within the framework of this article. Indeed, this method was originally designed to code an unstructured point cloud, with the underlying idea that in many cases the connectivity information could be deduced from the geometry thanks to reconstruction algorithms. The first version of this method [Devillers and Gandoin 2000] proposed a multiresolution compression algorithm for an unstructured point cloud, guaranteeing a theoretical minimal gain of n(log2 n - 2.4) bits (where n denotes the number of points), and very competitive practical performance, even compared to the best single-rate algorithms. The method was then extended to deal with connectivity compression, while remaining centered on the geometry [Gandoin and Devillers 2002]: the kd-tree decomposition scheme at the heart of the geometric coder is enriched with a connectivity coder that uses some of the classical mesh simplification operations introduced by Hoppe et al. [Hoppe 1996; Popović and Hoppe 1997]. Eventually, the method was recently resumed by Peng and Kuo [Peng and Kuo 2005], who improve its performance by imposing a priority on the cell subdivision order in the octree, and by proposing a more effective prediction model for the point distribution in the sub-cells generated by a subdivision.
2.3 Compression and Reconstruction
The idea of entrusting a reconstruction algorithm with the task of computing the connectivity of a mesh from its vertices has already been used in the context of mesh compression. Within the framework of terrain models, Kim et al. [Kim et al. 1999] suggest transmitting only a fraction of the edges composing the object, and using constrained Delaunay triangulation — in 2D, since the terrain models are projectable — to find the whole connectivity. Devillers and Gandoin [Devillers and Gandoin 2000] also mention the compression of GIS as an application of their geometric coder, and develop an edge coder well suited to this context, which results in compression rates as low as 0.2 bits per vertex for the complete connectivity. Besides, Devillers et al. show that the minimal set to be transmitted to guarantee the exact reconstruction of the initial model by constrained Delaunay triangulation is constituted by the edges that are not locally Delaunay [Devillers et al. 2003].

Unfortunately, the generalization from 2.5D to 3D meshes is not straightforward: since the mesh is no longer projectable, the use of the constrained Delaunay triangulation is impossible. Nevertheless, the idea of using a reconstruction method driven by a partial description of the connectivity remains extremely promising in terms of compression results, provided that a method can be found that is both powerful and capable of taking advantage of partial connectivity data to guarantee an exact reconstruction whatever the initial model may be.

A first attempt to use a reconstruction algorithm to encode connectivity information was proposed by Lewiner et al. [Lewiner et al. 2005]. The geometry is coded independently, through a kd-tree based algorithm derived from [Gandoin and Devillers 2002] and [Botsch et al. 2002], and the connectivity is coded through an advancing front triangulation inspired by the ball-pivoting strategy [Bernardini et al. 1999]. This algorithm requires a code for each edge of an active border that is initialized to a triangle. If the geometry of the mesh meets good sampling conditions, the entropy of each code will be extremely low. In Section 5, we compare this method to ours regarding compression rates.
2.4 Reconstruction by Convection
The problem of reconstructing a surface from a set of points has received considerable attention during the last decade [Mencl and Müller 1998]. Interesting and outstanding algorithms have been issued both in computer graphics and computational geometry, but we have decided to focus on the algorithms of the second category, since their combinatorial concerns are more suitable for lossless compression purposes. Most computational geometry algorithms exploit the geometric properties of structures such as the Delaunay triangulation, the Voronoi diagram or the power diagram of the input point set, assuming auspicious properties on the way the points were sampled on the surface (ε-sample [Amenta and Bern 1999]). A consistent set of facets can then be extracted from the geometric structure, sometimes using global heuristics or local analysis to solve ambiguities. The existing algorithms are numerous, and an attempt to classify and distinguish them has recently been proposed in the state of the art by Cazals and Giesen [Cazals and Giesen 2002]. Most of these algorithms produce triangular meshes, which makes them good candidates for compression purposes.

The convection algorithm we use in our method is based on a notion of flow similar to the one developed in the Wrap algorithm by Edelsbrunner [Edelsbrunner 2002] and the flow complex algorithm by Giesen and John [Giesen and John 2002]. Indeed, this reconstruction algorithm has been inspired by a convection process described by Zhao et al. [H.K. Zhao et al. 2001], who use it to initialize their surface before running an energy minimization process in the level set framework. Given an evolving surface S enclosing an input point set P, the convection process makes each point of S move inwards, along the direction of its normal, towards its closest point in P. However, the numerical scheme they propose can be translated into Delaunay terms, to make it depend only on the geometry of the input data set, and not on the precision of some grid around the surface. A demonstration of this result is presented by Chaine [Chaine 2003], together with a subsequent convection algorithm. A Delaunay triangulation of P is a partition of space into tetrahedra such that the ball circumscribed to each tetrahedron does not contain any point of P. The convection process is run directly in the 3D Delaunay triangulation of P, with an evolving surface S composed of oriented Delaunay facets. S is initialized to the convex hull of P. An oriented facet of S that does not meet the oriented Gabriel property — i.e. its inward diametral half-sphere is not empty — is attracted towards the 3 inner facets of the incident inner Delaunay tetrahedron (see Fig. 1 for an illustration in 2D, where the evolving surface is replaced by an evolving curve and the Delaunay tetrahedra are replaced by Delaunay triangles). During the convection process, thin parts can appear (see Fig. 1, c and d), on which a 2D surface-based version of the convection is run. A deeper explanation of this algorithm will be presented in Section 3, where it is revisited for compression purposes.
Figure 1: Geometric convection on a 2D point set: (a) the evolving curve C is initialized to the convex hull, (b) C locally evolves at the level of an edge iff the half-circle associated to this edge is not empty, (c) result of the initial convection process, (d) the convection process is locally enhanced to hollow a pocket out.

An interesting property of the convection algorithm is that it is driven locally, in the Delaunay triangulation of the points, without involving a global heuristic. The topology of the evolving surface may change during the convection process, so that it can handle surfaces with boundaries and surfaces of high genus. A drawback of the convection process is that it can locally get stuck in the presence of pockets [Edelsbrunner et al. 1998] hiding facets, but a simple and local point density analysis permits to hollow them out [Chaine 2003].
3 Principle of the Compression Algorithm
3.1 Compliant Meshes
The benefit of using a 3D reconstruction method for compression purposes is double: first, it allows very low costs for the coding of the connectivity — ideally, a null cost if the reconstruction algorithm is able to find the exact structure of the original mesh by itself — and secondly, unlike the previous methods of topological compression, no constraint is imposed on the order of the vertices given to the geometric coder, which constitutes an important efficiency token.

The main difficulty consists in being able to help the reconstruction algorithm at a reasonable cost: indeed, it is highly improbable to design an algorithm capable of finding the complete connectivity of a mesh from its vertex set alone. It is thus necessary to be able to alter the course of the reconstruction process, by occasionally changing the default behavior of the algorithm to drive it surely to a result known in advance: the structure of the initial mesh.

Among the plethora of available methods, the reconstruction by convection is distinguished from the others by two important assets: its qualities in terms of reconstruction — practical accuracy and faithfulness to the original model, handling of complex topologies, computation times — and, above all, its ability to be effectively driven by means of simple instructions. The first asset guarantees that the algorithm will not need to be guided too often (small number of codes); the second guarantees that this can be done at lower cost (compactness of the codes).

In return, like many algorithms in computational geometry, the convection algorithm is based on a Delaunay triangulation, and it may be difficult to force it out of this structure. This imposes a condition on the initial mesh: all its facets have to belong to the 3D Delaunay triangulation of its vertex set. This is quite a strong property that is not verified by all the 3D objects met in practice. We will see in a second stage (Section 3.3) how to break it.

Intuitively, the reconstruction by convection consists in embedding the vertex set in a block of marble that will be sculpted gradually, facet after facet, tetrahedron after tetrahedron, until it takes on the exact shape of the initial object. The algorithm begins by preparing the sculpting material: an enclosing block which is nothing else than the convex hull of the point cloud, as well as the network of galleries through which it is going to dig until the initial shape is reached. This network is composed of the tetrahedra of the 3D Delaunay triangulation, and every stage towards the object shape consists in examining a new facet of the current surface and deciding whether or not it is necessary to open it and excavate the inside of its associated tetrahedron. When a facet is opened, it is removed from the current surface of the 3D object under construction, and replaced by the three other facets of the excavated tetrahedron.

As mentioned above, the criterion that decides on this opening is purely local: it consists in observing whether the Gabriel half-sphere associated to the current oriented facet contains the fourth vertex of the tetrahedron or not. If it does, the oriented facet is opened and replaced in the current surface by the 3 other facets of the associated tetrahedron. (If this tetrahedron is already excavated, the surface locally vanishes through the current facet.) Otherwise, it is maintained on the surface.

Note that if all the facets of the initial object are in its 3D Delaunay triangulation, they are reachable by this sculpture process from the convex hull. The problem is to make sure that the algorithm will not dig a facet belonging to the initial mesh, and that, conversely, it will not be retained before having reached such a facet. Hence the need for additional codes allowing to modify the behavior of the convection algorithm, and to drive it towards the object to be coded. Even if the convection algorithm was designed to compute with good accuracy the structure of a sufficiently dense mesh, it will most likely need occasional assistance to guarantee the perfect reconstruction of any mesh.

To present our method in a progressive way, we focus in this section on a mesh M verifying the following two assumptions: all its facets are in the 3D Delaunay triangulation of its vertex set, and it is manifold, that is to say without borders nor thin parts. (This is what we call a "compliant" mesh.)

Under these assumptions, here is the general principle of our compression algorithm:

1. Build the 3D Delaunay triangulation D of the point set P.
2. Mark the facets of D belonging to the initial mesh M.
3. Initialize the current surface S with the convex hull of P (included in D).
4. Launch the convection process on S: every oriented facet f in S is examined in turn, and depending on whether its Gabriel half-sphere is empty or not, f is, by default, maintained on S or replaced by its 3 neighbors in D. To modify this default behavior of the convection reconstruction, it is necessary to locate the oriented facets of S for which the algorithm has to act differently. It is thus indispensable to define a completely deterministic traversal of the facets in S, so that the n-th oriented facet met during the compression is also the n-th oriented facet met during the decompression. More generally, it is necessary to ensure the synchronization of the compression and decompression algorithms so that the n-th action of the coder matches the n-th action of the decoder. Thanks to these reference marks, the behavior of the convection process can be safely altered in the following two circumstances:
   a) When the convection asks for the opening of a facet f that belongs to M, the coder forbids this opening, and codes the index of f (more exactly, the moment when f is met in the algorithm) to warn the decoder that at this precise moment of the reconstruction, it has to break the rules of the convection algorithm. We will call this a RDG event (Retain Despite the Geometry).
   b) At the end of this first step, it is possible that some oriented facets of S — those whose Gabriel half-spheres are empty, and which the convection therefore did not decide to dig — do not belong to M. Therefore, it is necessary to specify to the decompression algorithm that these oriented facets must be opened, against the convection rules: for each remaining oriented facet of S not belonging to M, the coder transmits its index (i.e. the moment when it has been met) and relaunches the convection process by forcing its opening. We will call this an ODG event (Open Despite the Geometry). Note that by relaunching the convection process locally, some facets that were due to be forced can disappear by auto-intersection of S.
A detailed description of step 4 can be given through a recursive function Convection applied to the current surface S. For each oriented facet f, let f_asso denote the oriented facet associated to f, that is to say the oriented facet constructed on the same vertices as f, but with the opposite orientation. When f and f_asso are both in the surface S, they belong to a thin part of S. Besides, let f1, f2 and f3 denote the 3 neighbors of f in the 3D Delaunay triangulation D, i.e. the 3 other facets of the tetrahedron that is removed when f is opened. Finally, we would like to draw the reader's attention to the fact that the reference marks transmitted to the decoder are not exactly absolute moments of events in the compression process, but rather intervals between two such moments (which explains the presence of the instructions "time ← 0" in the detailed algorithms). This well-known technique of differential coding allows to reduce the size of the transmitted codes.

Function Convection(S : Surface) :
  while S ≠ ∅ do
    f ← pop first oriented facet in S
    if Gabriel half-sphere of f is empty then
      push f at the end of S_temp
    else if f ∈ M then                              { RDG event }
      output time
      time ← 0
      push f at the end of S_final
    else
      if f is such that f_asso ∈ S (resp. S_temp) then
        remove f_asso from S (resp. S_temp)
      else
        push {f1, f2, f3} at the end of S
      end if
      time ← time + 1
    end if
  end while
With this recursive definition of the Convection function, step 4 of our algorithm amounts to the following:

Main function :
  S ← convex hull of P
  S_border ← ∅    { set of facets creating a thin part }
  S_final ← ∅     { set of facets that are in M }
  S_temp ← ∅      { set of facets candidates to be in M }
  time ← 0
  Convection(S)
  while S_temp ≠ ∅ do
    f ← pop first oriented facet in S_temp
    if f is such that f_asso ∈ S_temp then
      remove f_asso from S_temp
      push {f, f_asso} at the end of S_border
    else if f ∈ M then
      push f at the end of S_final
      time ← time + 1
    else                                            { ODG event }
      output time
      time ← 0
      S ← {f1, f2, f3}
      Convection(S)
    end if
  end while
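The differential coding of the event moments (the "time ← 0" instructions above) can be sketched as a simple gap transform. This is an illustrative sketch; the function names are ours, and the real coder would feed these gaps to an entropy coder rather than store them as integers.

```python
# Illustrative sketch of the differential coding of RDG/ODG event
# moments: instead of absolute positions in the facet traversal, the
# coder emits the number of facets examined since the previous event,
# which keeps the transmitted symbols small.

def encode_gaps(event_indices):
    """Absolute event positions -> gaps since the previous event."""
    gaps, last = [], -1
    for i in event_indices:
        gaps.append(i - last - 1)   # facets examined since the last event
        last = i
    return gaps

def decode_gaps(gaps):
    """Inverse transform, as used on the decompression side."""
    out, last = [], -1
    for g in gaps:
        last = last + g + 1
        out.append(last)
    return out
```

For example, events at traversal positions [3, 10, 11] are transmitted as the gaps [3, 6, 0], and the decoder recovers the original positions exactly.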
At the end of the compression algorithm, the convection has been driven towards the initial mesh M, and the correcting data have been stored for the decompression algorithm. However, a last stage remains, which consists in cleaning the thin parts possibly generated by the convection. Indeed, since we first assumed that the initial mesh was manifold, it suffices to delete all these thin parts, that is to say each facet of S whose associated facet is also on S. In the algorithm described above, the thin parts exactly match the set S_border.
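The detection of thin parts described above (a facet of S whose associated, oppositely oriented facet is also on S) can be sketched as follows. This is an illustrative sketch under our own representation: an oriented facet is a vertex-index triple, and the rotation-invariant key is our assumption, not a structure from the paper.

```python
# Illustrative sketch (our representation): detecting thin parts of the
# final surface S. An oriented facet is a triple of vertex indices; its
# associated facet is the same triple with the opposite orientation.

def canonical(facet):
    """Rotation-invariant key of an oriented triangle (a, b, c):
    all three cyclic rotations map to the same key."""
    a, b, c = facet
    return min((a, b, c), (b, c, a), (c, a, b))

def thin_part_facets(surface):
    """Return the facets of `surface` whose opposite orientation is also
    present. With a manifold initial mesh, these are exactly the thin
    parts deleted in the final stage of the compression algorithm."""
    keys = {canonical(f) for f in surface}
    thin = []
    for f in surface:
        a, b, c = f
        if canonical((a, c, b)) in keys:   # associated facet also on S
            thin.append(f)
    return thin
```

For instance, in a surface containing both (1, 2, 3) and its associated facet (1, 3, 2), both are reported as a thin part, while an isolated facet such as (4, 5, 6) is kept.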
3.2 Non Manifold Meshes
In this section, we adapt the previous algorithm to non-manifold meshes with thin parts. It suffices to modify the final stage: it is no longer possible to delete the thin parts created by the algorithm, because facets belonging to the initial mesh could be lost. This stage is thus replaced by a new convection process, but this time in its 2D version, and on the thin parts only. In this framework, the convection updates a curve C constituted by the edges composing the boundaries of the thin parts. The oriented edges of C are examined in turn: an oriented edge whose Gabriel half-circle is empty is kept on C, whereas an oriented edge in the opposite case is removed and replaced in C by the two other oriented edges of its incident triangle (e1 and e2 in the algorithm detailed below).

The general principle remains the same as in the 3D version of the algorithm, and each edge that is opened against the convection rules (ODG event) launches a new 2D convection process. Note that there is no set C_border similar to the previous S_border. Here is a detailed version of the portion of the compression algorithm dedicated to the processing of the thin parts. It adds to the steps 1 to 4 described in the previous section:

5. Processing of thin parts

Function Convection2D(C : Curve) :
  while C ≠ ∅ do
    e ← pop first oriented edge in C
    if Gabriel half-circle of e is empty then
      push e at the end of C_temp
    else if e ∈ M then                              { RDG event }
      output time
      time ← 0
      push e at the end of C_final
    else
      if e is such that e_asso ∈ C (resp. C_temp) then
        remove e_asso from C (resp. C_temp)
      else
        push {e1, e2} at the end of C
      end if
      time ← time + 1
    end if
  end while
Main 2D function :
  C ← boundaries of S_border
  C_final ← ∅ ; C_temp ← ∅ ; time ← 0
  Convection2D(C)
  while C_temp ≠ ∅ do
    e ← pop first oriented edge in C_temp
    if e is such that e_asso ∈ C_temp then
      remove e_asso from C_temp
    else if e ∈ M then
      push e at the end of C_final
      time ← time + 1
    else                                            { ODG event }
      output time
      time ← 0
      C ← {e1, e2}
      Convection2D(C)
    end if
  end while
3.3 Non Delaunay Meshes
As previously described, the method can only be applied to meshes whose facets all belong to the 3D Delaunay triangulation of their vertex set. Indeed, we saw that this structure constitutes the support of the convection algorithm, and that it is thus impossible for the current surface to reach a non-Delaunay facet. Nevertheless, most of the meshes met in practice contain a fraction of non-Delaunay facets (see the unfavorable case of Fig. 2). So, to make the method widely usable, it is necessary to manage the coding of such facets. A simple and efficient way to do this is to code explicitly all the non-Delaunay facets of the initial mesh at the same time as its vertices, using the method of Gandoin and Devillers [Gandoin and Devillers 2002], which we will denote GD in the following. More precisely, the non-Delaunay facets constitute patches on the surface of the mesh. Before launching the convection process, the connectivity of these patches is transmitted to the GD coder. Then, the algorithm previously described is applied to the mesh minus the non-Delaunay facets. Consequently, even if the initial mesh is manifold, the convection process is going to operate on a mesh with boundaries. As shown in the previous section, this is no theoretical problem, but it risks resulting in a significant increase in the number of driving codes. Indeed, each facet of the mesh will be reached twice, first through a facet oriented outwards, then through the associated oriented facet lying on the internal side of the object surface. We propose several heuristics in order to limit this phenomenon. The intuitive idea is to block temporarily the convection surface when it is about to cross a patch — a facet opening is delayed if the tetrahedron behind it intersects the mesh — so as to favor the discovery of the initial mesh facets from outside (see Fig. 3). Thus, transmitting the total number of mesh facets to the decoder allows stopping the convection process as soon as they are all discovered, which drastically reduces the number of corrective codes.

Figure 2: Fandisk : complete (12946 facets), then showing the non-Delaunay facets only (1104 facets representing 8.5% of the set)
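The notion of a non-Delaunay facet can be made concrete in the 2D analogue, where a triangle belongs to the Delaunay triangulation of the vertex set iff its open circumscribed disk contains no other vertex. The following is an illustrative pure-Python sketch under that 2D analogue (the paper works in 3D and would query the Delaunay structure directly); all names are ours.

```python
# Illustrative 2D analogue (our sketch, not the paper's 3D machinery):
# testing whether a triangle is a Delaunay facet of its vertex set,
# i.e. whether its open circumcircle is empty of other vertices.

def circumcircle(a, b, c):
    """Center and squared radius of the circle through a, b, c."""
    d = 2.0 * (a[0]*(b[1]-c[1]) + b[0]*(c[1]-a[1]) + c[0]*(a[1]-b[1]))
    ux = ((a[0]**2 + a[1]**2)*(b[1]-c[1]) + (b[0]**2 + b[1]**2)*(c[1]-a[1])
          + (c[0]**2 + c[1]**2)*(a[1]-b[1])) / d
    uy = ((a[0]**2 + a[1]**2)*(c[0]-b[0]) + (b[0]**2 + b[1]**2)*(a[0]-c[0])
          + (c[0]**2 + c[1]**2)*(b[0]-a[0])) / d
    return (ux, uy), (ux - a[0])**2 + (uy - a[1])**2

def is_delaunay(tri, points, eps=1e-9):
    """True iff no vertex of `points` lies strictly inside the
    circumcircle of `tri`. Triangles failing this test are the 2D
    counterpart of the non-Delaunay patches of Section 3.3."""
    (cx, cy), r2 = circumcircle(*tri)
    return all((p[0]-cx)**2 + (p[1]-cy)**2 >= r2 - eps
               for p in points if p not in tri)
```

For example, with the vertex set {(0,0), (2,0), (1,3), (1,1)}, the flat triangle ((0,0), (2,0), (1,1)) is Delaunay, while the tall triangle ((0,0), (2,0), (1,3)) is not, since (1,1) lies inside its circumcircle.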
