Refine registration
Input arguments
This script runs with python run_system.py [config] --refine. In [config], ["path_dataset"] should have a subfolder fragments, which stores the fragments in .ply files and a pose graph in a .json file.
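For reference, the keys that this stage reads from [config] look roughly like the sketch below. The key names follow the code excerpts on this page; the concrete values and template file names are illustrative assumptions, not the defaults shipped with the examples.

# Hypothetical [config] sketch for the --refine stage; values are illustrative.
config = {
    "path_dataset": "dataset/realsense/",  # must contain the fragments subfolder
    "voxel_size": 0.05,  # base voxel size in meters
    "icp_method": "color",  # point_to_point, point_to_plane, color, or generalized
    "python_multi_threading": True,
    "debug_mode": False,
    "template_global_posegraph": "scene/global_registration.json",
    "template_global_posegraph_optimized": "scene/global_registration_optimized.json",
    "template_refined_posegraph": "scene/refined_registration.json",
    "preference_loop_closure_registration": 5.0,
}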
The main function runs local_refinement and optimize_posegraph_for_scene. The first function performs pairwise registration on the pairs detected by Register fragments. The second function performs multiway registration.
Fine-grained registration
# examples/python/reconstruction_system/refine_registration.py

def multiscale_icp(source,
                   target,
                   voxel_size,
                   max_iter,
                   config,
                   init_transformation=np.identity(4)):
    current_transformation = init_transformation
    for i, scale in enumerate(range(len(max_iter))):  # multi-scale approach
        iter = max_iter[scale]
        distance_threshold = config["voxel_size"] * 1.4
        print("voxel_size {}".format(voxel_size[scale]))
        source_down = source.voxel_down_sample(voxel_size[scale])
        target_down = target.voxel_down_sample(voxel_size[scale])
        if config["icp_method"] == "point_to_point":
            result_icp = o3d.pipelines.registration.registration_icp(
                source_down, target_down, distance_threshold,
                current_transformation,
                o3d.pipelines.registration.TransformationEstimationPointToPoint(),
                o3d.pipelines.registration.ICPConvergenceCriteria(
                    max_iteration=iter))
        else:
            source_down.estimate_normals(
                o3d.geometry.KDTreeSearchParamHybrid(
                    radius=voxel_size[scale] * 2.0, max_nn=30))
            target_down.estimate_normals(
                o3d.geometry.KDTreeSearchParamHybrid(
                    radius=voxel_size[scale] * 2.0, max_nn=30))
            if config["icp_method"] == "point_to_plane":
                result_icp = o3d.pipelines.registration.registration_icp(
                    source_down, target_down, distance_threshold,
                    current_transformation,
                    o3d.pipelines.registration.TransformationEstimationPointToPlane(),
                    o3d.pipelines.registration.ICPConvergenceCriteria(
                        max_iteration=iter))
            if config["icp_method"] == "color":
                # Colored ICP is sensitive to threshold.
                # Fallback to preset distance threshold that works better.
                # TODO: make it adjustable in the upgraded system.
                result_icp = o3d.pipelines.registration.registration_colored_icp(
                    source_down, target_down, voxel_size[scale],
                    current_transformation,
                    o3d.pipelines.registration.TransformationEstimationForColoredICP(),
                    o3d.pipelines.registration.ICPConvergenceCriteria(
                        relative_fitness=1e-6,
                        relative_rmse=1e-6,
                        max_iteration=iter))
            if config["icp_method"] == "generalized":
                result_icp = o3d.pipelines.registration.registration_generalized_icp(
                    source_down, target_down, distance_threshold,
                    current_transformation,
                    o3d.pipelines.registration.TransformationEstimationForGeneralizedICP(),
                    o3d.pipelines.registration.ICPConvergenceCriteria(
                        relative_fitness=1e-6,
                        relative_rmse=1e-6,
                        max_iteration=iter))
        current_transformation = result_icp.transformation
        if i == len(max_iter) - 1:
            information_matrix = o3d.pipelines.registration.get_information_matrix_from_point_clouds(
                source_down, target_down, voxel_size[scale] * 1.4,
                result_icp.transformation)

    if config["debug_mode"]:
        draw_registration_result_original_color(source, target,
                                                result_icp.transformation)
    return (result_icp.transformation, information_matrix)

Several ICP variants can be selected via config["icp_method"] for the fine-grained registration: point-to-point, point-to-plane, colored, and generalized ICP. The color option is recommended since it uses color information to prevent drift. See [Park2017] for details.
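As a usage sketch, multiscale_icp can be driven coarse-to-fine on a pair of fragments. The fragment file names, voxel schedule, and iteration counts below are assumptions for illustration; only the call pattern follows the excerpt above.

# Hypothetical driver for multiscale_icp; parameters are illustrative.
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("fragments/fragment_000.ply")
target = o3d.io.read_point_cloud("fragments/fragment_001.ply")
config = {"voxel_size": 0.05, "icp_method": "color", "debug_mode": False}

# Coarse-to-fine schedule: larger voxels and more iterations first.
voxel_sizes = [config["voxel_size"], config["voxel_size"] / 2.0,
               config["voxel_size"] / 4.0]
max_iters = [50, 30, 14]

transformation, information = multiscale_icp(source, target, voxel_sizes,
                                             max_iters, config,
                                             init_transformation=np.identity(4))
print(transformation)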
Multiway registration
# examples/python/reconstruction_system/refine_registration.py

def update_posegraph_for_scene(s, t, transformation, information, odometry,
                               pose_graph):
    if t == s + 1:  # odometry case
        odometry = np.dot(transformation, odometry)
        odometry_inv = np.linalg.inv(odometry)
        pose_graph.nodes.append(
            o3d.pipelines.registration.PoseGraphNode(odometry_inv))
        pose_graph.edges.append(
            o3d.pipelines.registration.PoseGraphEdge(s,
                                                     t,
                                                     transformation,
                                                     information,
                                                     uncertain=False))
    else:  # loop closure case
        pose_graph.edges.append(
            o3d.pipelines.registration.PoseGraphEdge(s,
                                                     t,
                                                     transformation,
                                                     information,
                                                     uncertain=True))
    return (odometry, pose_graph)

This script uses the technique demonstrated in /tutorial/pipelines/multiway_registration.ipynb. The function update_posegraph_for_scene builds a pose graph for multiway registration of all fragments. Each graph node represents a fragment and its pose, which transforms the geometry to the global space. Once the pose graph is built, the function optimize_posegraph_for_scene is called for multiway registration.
# examples/python/reconstruction_system/optimize_posegraph.py

def optimize_posegraph_for_scene(path_dataset, config):
    pose_graph_name = join(path_dataset, config["template_global_posegraph"])
    pose_graph_optimized_name = join(
        path_dataset, config["template_global_posegraph_optimized"])
    run_posegraph_optimization(
        pose_graph_name,
        pose_graph_optimized_name,
        max_correspondence_distance=config["voxel_size"] * 1.4,
        preference_loop_closure=config["preference_loop_closure_registration"])

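The helper run_posegraph_optimization is defined elsewhere in optimize_posegraph.py. Conceptually it wraps o3d.pipelines.registration.global_optimization; the sketch below shows that call pattern under assumed option values (edge_prune_threshold, reference_node), not the script's exact settings.

# Sketch of the global optimization step behind run_posegraph_optimization;
# option values here are assumptions for illustration.
import open3d as o3d

def run_posegraph_optimization_sketch(pose_graph_name, pose_graph_optimized_name,
                                      max_correspondence_distance,
                                      preference_loop_closure):
    pose_graph = o3d.io.read_pose_graph(pose_graph_name)
    method = o3d.pipelines.registration.GlobalOptimizationLevenbergMarquardt()
    criteria = o3d.pipelines.registration.GlobalOptimizationConvergenceCriteria()
    option = o3d.pipelines.registration.GlobalOptimizationOption(
        max_correspondence_distance=max_correspondence_distance,
        edge_prune_threshold=0.25,
        preference_loop_closure=preference_loop_closure,
        reference_node=0)
    o3d.pipelines.registration.global_optimization(pose_graph, method, criteria,
                                                   option)
    o3d.io.write_pose_graph(pose_graph_optimized_name, pose_graph)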
Main registration loop
The function make_posegraph_for_refined_scene below calls all the functions introduced above.
# examples/python/reconstruction_system/refine_registration.py

def make_posegraph_for_refined_scene(ply_file_names, config):
    pose_graph = o3d.io.read_pose_graph(
        join(config["path_dataset"],
             config["template_global_posegraph_optimized"]))

    n_files = len(ply_file_names)
    matching_results = {}
    for edge in pose_graph.edges:
        s = edge.source_node_id
        t = edge.target_node_id
        matching_results[s * n_files + t] = \
            matching_result(s, t, edge.transformation)

    if config["python_multi_threading"] == True:
        from joblib import Parallel, delayed
        import multiprocessing
        import subprocess
        MAX_THREAD = min(multiprocessing.cpu_count(),
                         max(len(pose_graph.edges), 1))
        results = Parallel(n_jobs=MAX_THREAD)(
            delayed(register_point_cloud_pair)(
                ply_file_names, matching_results[r].s, matching_results[r].t,
                matching_results[r].transformation, config)
            for r in matching_results)
        for i, r in enumerate(matching_results):
            matching_results[r].transformation = results[i][0]
            matching_results[r].information = results[i][1]
    else:
        for r in matching_results:
            (matching_results[r].transformation,
             matching_results[r].information) = \
                register_point_cloud_pair(ply_file_names,
                                          matching_results[r].s,
                                          matching_results[r].t,
                                          matching_results[r].transformation,
                                          config)

    pose_graph_new = o3d.pipelines.registration.PoseGraph()
    odometry = np.identity(4)
    pose_graph_new.nodes.append(
        o3d.pipelines.registration.PoseGraphNode(odometry))
    for r in matching_results:
        (odometry, pose_graph_new) = update_posegraph_for_scene(
            matching_results[r].s, matching_results[r].t,
            matching_results[r].transformation, matching_results[r].information,
            odometry, pose_graph_new)
    print(pose_graph_new)
    o3d.io.write_pose_graph(
        join(config["path_dataset"], config["template_refined_posegraph"]),
        pose_graph_new)

The main workflow is: pairwise local refinement -> multiway registration.
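The loop above stores per-pair results in matching_result objects, a small container defined earlier in refine_registration.py. A sketch of the fields it needs to expose is shown below; the exact class in the example script may differ slightly.

# Sketch of the per-pair container used by make_posegraph_for_refined_scene;
# only the fields read and written above are shown.
import numpy as np

class matching_result:

    def __init__(self, s, t, trans):
        self.s = s  # source fragment id
        self.t = t  # target fragment id
        self.transformation = trans  # 4x4 pairwise transform, refined in place
        self.information = np.identity(6)  # 6x6 information matrix for the edge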
Results
The pose graph optimization outputs the following messages:
[GlobalOptimizationLM] Optimizing PoseGraph having 14 nodes and 35 edges.
Line process weight : 789.730200
[Initial ] residual : 1.208286e+04, lambda : 1.706306e+01
[Iteration 00] residual : 2.410383e+03, valid edges : 22, time : 0.000 sec.
[Iteration 01] residual : 8.127856e+01, valid edges : 22, time : 0.000 sec.
[Iteration 02] residual : 8.031355e+01, valid edges : 22, time : 0.000 sec.
Delta.norm() < 1.000000e-06 * (x.norm() + 1.000000e-06)
[GlobalOptimizationLM] total time : 0.001 sec.
[GlobalOptimizationLM] Optimizing PoseGraph having 14 nodes and 35 edges.
Line process weight : 789.730200
[Initial ] residual : 8.031355e+01, lambda : 1.716826e+01
Delta.norm() < 1.000000e-06 * (x.norm() + 1.000000e-06)
[GlobalOptimizationLM] total time : 0.000 sec.
CompensateReferencePoseGraphNode : reference : 0
There are 14 fragments, and the pose graph built from them has 35 edges, of which 22 are reported as valid during optimization. Edges detected as false positives by the line process are pruned, and pose graph optimization runs again on the pruned graph to achieve tight alignment.
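To sanity-check the output, the refined pose graph can be loaded and inspected. The file name below assumes a typical value of config["template_refined_posegraph"] and should be adjusted to the actual configuration.

# Hypothetical inspection of the refined pose graph; the path is an assumption.
import open3d as o3d

pose_graph = o3d.io.read_pose_graph(
    "dataset/realsense/scene/refined_registration.json")
print("nodes:", len(pose_graph.nodes))
print("edges:", len(pose_graph.edges))
print("pose of fragment 0:\n", pose_graph.nodes[0].pose)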