Multi-Projector Calibration Based on Virtual Viewing Space
Abstract
In large-scale maritime maneuvering simulators, multi-channel systems are usually used to tile multiple projectors on a cylindrical display to create a large virtual environment, which brings more realistic immersion to the users. In non-planar multi-projector display systems, geometric and photometric correction suffer from difficult projector calibration, heavy manual intervention, and poor adaptability. To solve these problems, a multi-projector calibration based on a virtual viewing space is proposed in this paper. First, a virtual environment of multiple projectors is created, and by using structured light and a reprojection model on a cylindrical wall, the internal and external parameters of each projector are calibrated separately. Finally, geometric correction is completed by the mapping relationship between the target images and the calibrated reprojected images. This method is free of the limitation of a single camera's angle of view and can calibrate any of the projectors in a multi-channel system. The example results show that the projector parameters can be estimated more accurately, and the efficiency and accuracy of geometric correction can be improved. Good results are achieved when applying this method in a maritime maneuvering simulator system.
Introduction
High-resolution and wide-field displays create an immersive visual experience that facilitates applications such as remote collaboration, scientific visualization, and human-computer interaction (Yicheng and Yong, 2013; Yiyu et al., 2013). Multi-projector display walls are an economical and effective way to build large seamless displays today. In a maritime maneuvering simulator, the multi-projector display wall often adopts a 180-degree or even 360-degree cylindrical (non-planar) display, and a multi-channel visual simulation system projects a completely virtual environment image onto the curved display wall, which gives the user more realistic immersion (Yicheng and Yong, 2013). In this system, geometric and photometric corrections are the key technologies; geometric correction is the premise and foundation of the display system and directly affects the display results.
Images projected directly from the projector onto a curved surface are distorted. To construct a multi-channel visual simulation system that displays an undistorted image on a curved display wall, it is necessary to pre-deform the projected image; this is called geometric correction. The multi-channel system needs geometric correction because the parameters of the projector differ from the view frustum of the virtual camera. The key of geometric correction is to establish an accurate correspondence between the projector image and the display wall (Chao et al., 2013; Fang, 2013).
This paper proposes a multi-projector calibration based on a virtual viewing space (VVS) for the cylindrical display wall in a maritime simulator. Its characteristics are as follows:
(1) The projection and reprojection of the actual projector and virtual camera are modeled, and the mapping relationship between VVS and the actual viewing space (AVS) is established.
(2) In the calibration process, the actual projector is modeled by the pinhole model; the internal parameters of the projector are calibrated by the structured light method, and the external parameters are calibrated from the reprojected image in VVS.
(3) Geometric correction is completed by the mapping relationship between the calibrated reprojected image of an actual projector and the target image of a virtual camera in the multi-channel visual simulation system.
Related work
At present, geometric correction technology often uses a camera as an auxiliary device that cooperates with the projector. Raskar et al. (1999) proposed a planar geometric correction system using a single camera to achieve geometric correction of a multi-channel system. However, a single camera captures only a limited range of the projected images, which limits, to some extent, the scalability of this system. Bhasker et al. (2007) achieved planar projection distortion correction (i.e., radial and tangential distortions) using rational Bezier patches. Chen et al. (2002) used multiple cameras on planar displays to achieve a homography-tree based registration across multiple projectors, but this increased the complexity of the system and generated cumulative geometric correction errors.
On non-planar displays, geometric correction is relatively more complicated. Brown and Seales (2002) used a single camera to acquire equidistant feature points projected by each projector to establish a one-to-one correspondence between projector and camera spatial feature points, organized these feature points into a triangular mesh, and then used texture deformation to achieve geometric correction of arbitrarily curved displays. Raskar et al. (2002, 2004, 2005) modeled curved displays as parametric quadric surfaces (such as domes and cylinders) to achieve geometric correction on them. Harville et al. (2006) and Sun et al. (2008) pasted artificial marker points on the upper and lower edges of the curved display and used a two-dimensional linear mapping to establish the relationship between the projector and the display. However, since there is no spatial three-dimensional information, this cannot be used for viewpoint-related applications. Using a single camera, Sajadi et al. (2009) calibrated the internal and external parameters of the projector by extracting the upper and lower edges of the curved display to complete non-planar geometric correction. The limitation is that it can only be used on smooth vertical surfaces, and the curved display cannot have sharply changing edges and corners. Later, Sajadi et al. (2010, 2011) improved the algorithm to extend it to all vertically extruded surfaces. Chao et al. (2013) used structured light to calculate the mapping relationship between each projector and the display wall to achieve nonlinear geometric correction. Fang (2013) calibrated the projector in an interactive way and then combined the channel view frustums to realize multi-channel geometric correction. Dong et al. (2015) used structured light to realize automatic splicing of the cylindrical projection image and corrected the errors caused by the manufacturing and arrangement of the display.
However, limited by the angle of view of a single camera, these methods can only calibrate 4 to 6 projectors in a projection display.
The above methods show that the mapping relationship between the projectors and the display is the key to geometric correction in multi-channel systems. In practical applications, the following problems still exist in the existing work:
(1) Due to the lack of a projection matrix for the projector, geometric correction is often performed simply with rational Bezier patches, which requires a large amount of manual intervention (commercial software generally adopts this method).
(2) In automatic calibration of the projector, reference points often have to be placed on the display wall, which increases the complexity.
(3) When calibrating multiple projectors, the number of projectors that can be calibrated is often limited by the single-camera field of view.
The new algorithm proposed in this paper constructs the VVS and moves the geometric correction process from AVS into VVS. It can calibrate the projectors separately and calculate the mapping relationship between the viewing image and the target image to complete the multi-channel geometric correction, improving the efficiency and application range of multi-channel geometric correction.
Virtual viewing space
In a multi-channel visual simulation system, it is often necessary to display a large viewing angle scene on a large screen to provide an immersive environment. This environment is mainly composed of a physical screen, the projectors, a frame buffer image, and a reprojected image. A frame buffer image refers to the image formed by a virtual camera in the virtual environment, and a reprojected image refers to the image formed by reprojecting a frame buffer image onto the screen.
Figure 1: A multichannel visual simulation system
As shown in Figure 1, the cylindrical simulation system maps the multi-channel frame buffer images, centered on a certain viewpoint in the virtual scene, onto the curved wall so that people can view continuous virtual pictures. However, since the viewpoint reference center differs from the projection centers of the actual projectors, the viewing images in the channels and in the overlapping areas between channels will be geometrically deformed.
This paper establishes a virtual viewing space that is consistent with the actual one, including establishing projection and reprojection models for the actual projector and virtual camera, and realizes the mapping of VVS to AVS for geometric correction in multi-channel visual simulation systems.
Projection model
According to computer vision, the camera can be modeled as a pinhole. To convert the world coordinate system into the camera coordinate system, the following transformations are completed (Bradski and Kaehler, 2009).
$\left[\begin{array}{c}{x}_{c}\\ {y}_{c}\\ {z}_{c}\end{array}\right]=\left[R\mid T\right]\left[\begin{array}{c}{x}_{w}\\ {y}_{w}\\ {z}_{w}\\ 1\end{array}\right] \quad (1)$

${z}_{c}\left[\begin{array}{c}u\\ v\\ 1\end{array}\right]=\underset{K}{\underbrace{\left[\begin{array}{ccc}{f}_{u}& \lambda & {u}_{0}\\ 0& {f}_{v}& {v}_{0}\\ 0& 0& 1\end{array}\right]}}\left[\begin{array}{c}{x}_{c}\\ {y}_{c}\\ {z}_{c}\end{array}\right] \quad (2)$
$R$ is a 3×3 rotation matrix, and $T$ is a 3×1 translation vector. ${f}_{u}=f/dx$, ${f}_{v}=f/dy$, and $\lambda =\lambda ' f$, where $f$ is the focal length of the camera, $dx$ and $dy$ are the physical sizes of a pixel in the horizontal and vertical directions, and $\lambda '$ is the tilt factor. $[R, T]$ is the external parameter matrix, which describes the transformation from the world coordinate system to a coordinate system with the origin at the camera center; $K$ is the internal parameter matrix, and $({u}_{0},{v}_{0})$ is the coordinates of the principal point. The camera projection matrix is $K[R, T]$.
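The projection of equations (1) and (2) can be sketched in a few lines. This is an illustrative numpy implementation, not code from the paper; the function name and the sample values are ours.

```python
import numpy as np

def project_point(K, R, T, Xw):
    """Project a 3D world point to pixel coordinates via equations (1)-(2).

    K: 3x3 internal parameter matrix; R: 3x3 rotation; T: 3-vector translation.
    """
    Xc = R @ Xw + T          # world -> camera coordinates (eq. 1)
    uvw = K @ Xc             # camera coordinates -> image plane (eq. 2)
    return uvw[:2] / uvw[2]  # divide by z_c to get the pixel (u, v)

# Identity pose and unit focal length: the point (1, 2, 4) maps to (0.25, 0.5)
K = np.eye(3)
uv = project_point(K, np.eye(3), np.zeros(3), np.array([1.0, 2.0, 4.0]))
```

With a real $K$ the result is scaled by $f_u$, $f_v$ and shifted by the principal point $(u_0, v_0)$, exactly as equation (2) states.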
In VVS, to simplify the cropping operation of the scene, the internal parameter matrix of the virtual camera adopts a normalized projection matrix $K_C$, that is, the tilt factor ${\lambda}^{\text{'}}=0$ and $({u}_{0},{v}_{0})$ is (0, 0). The projection model of an actual projector can be described as the inverse projection of a camera and can also be modeled as a pinhole, composed of an internal and an external parameter matrix (Moreno and Taubin, 2012).
As shown in Figure 2, the projection model of the virtual camera refers to the view frustum with vertical field of view $2\alpha$ and viewing center O_{C}. The frame buffer image plane A_{C}B_{C}C_{C}D_{C} is obtained by the perspective projection transformation. Similarly, a projector can be described as the inverse of a camera. In VVS, the projection matrix of the projector is calculated according to the internal and external parameters of the actual one, the frame buffer image is projected into three-dimensional space through its center, and the projection model of the actual projector refers to the view frustum formed in this way.
Figure 2: Projection and reprojection models of a virtual camera and an actual projector
Reprojection model
In perspective projection, projecting the original image onto another surface is called a reprojection transformation. Reprojection is required for geometric correction of the projected image. This paper establishes reprojection models based on the display in VVS, including the reprojection models of the virtual camera and the actual projector.
As shown in Figure 2, the reprojection model of the virtual camera means that the frame buffer image A_{C}B_{C}C_{C}D_{C} of the virtual camera is reprojected onto the screen EFGH through the view frustum with viewing center O_{C}, finally forming the target image E_{C}F_{C}G_{C}H_{C}. Similarly, the reprojection model of the projector means that the frame buffer image A_{C}B_{C}C_{C}D_{C} of the actual projector is reprojected onto the screen EFGH through the view frustum of the projection center, finally forming the viewing image E_{P}F_{P}G_{P}H_{P}.
In multi-channel systems, the reprojected image of the virtual camera is the target image formed on the display by the ideal viewing point and projection parameters, which is the result that people want to view on the screen. Due to the different internal and external projection parameters, the reprojected image of an actual projector (the viewing image) obviously deviates from the target image and requires geometric correction. In Figure 2, the two reprojected images form an intersection area. The purpose of geometric correction in multi-channel systems is that the viewing image shows the corresponding content of the target image inside the intersection area and shows nothing outside it. Geometric correction is performed through the reprojection model so that the projector projects the image that people want.
Virtual viewing space calibration
VVS is a mapping of AVS. It correctly establishes the above projection and reprojection models of the actual projector and virtual camera for geometric correction in multi-channel systems.
Projection screen construction
In actual space, projection screens tend to have regular geometric shapes and are located at corresponding locations in actual space. According to the geometry of the screen, the virtual scene viewing point is used as the center point to construct the shape and position of the screen in VVS.
In large-scale maritime simulators, the projection screen is often cylindrical. As shown in Figure 1, the width ($w$), height ($h$) and depth ($d$) of the screen can be obtained through actual measurements. By equations (3) and (4), the cylinder radius ($R$) and the viewing angle ($\theta$) can be calculated, and the screen can be constructed in the virtual viewing space from the three parameters $R$, $\theta$ and $h$.
$R=\frac{4{d}^{2}+{w}^{2}}{8d} \quad (3)$

$\theta =2\arcsin\left(\frac{4wd}{4{d}^{2}+{w}^{2}}\right) \quad (4)$
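Equations (3) and (4) follow from the chord geometry of the cylinder. A minimal sketch (the function name and sample values are ours; the $w=2R$, $d=R$ case is a semicircular screen, which matches the 8.42 m radius, 180-degree wall used later in the experiment):

```python
import math

def screen_params(w, d):
    """Cylinder radius R and viewing angle theta from the measured
    chord width w and depth d of the screen (equations 3 and 4)."""
    R = (4 * d**2 + w**2) / (8 * d)
    theta = 2 * math.asin(4 * w * d / (4 * d**2 + w**2))
    return R, theta

# Semicircular screen: w = 2R, d = R should give back R and theta = 180 deg
R, theta = screen_params(w=16.84, d=8.42)
```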
Virtual camera calibration
The internal and external parameters of the virtual camera can be obtained from the virtual camera position, pose and perspective matrix, which are known in the multi-channel visual simulation system. Through the mapping relationship between VVS and AVS, the geometric center of VVS is used as the virtual camera viewing point, and this point is also the reprojection center of the virtual camera.
Actual projector calibration
In VVS, the key is to calibrate the actual projector, establishing the mapping between the projector and the wall and the mapping between a viewing image and a target image. There are few methods for projector calibration, and they suffer from various shortcomings: lack of accuracy, poor anti-interference ability, or difficulty of calibration (Gao et al., 2008; Martynov et al., 2011; Draréni et al., 2012). In this paper, VVS is used for multi-projector calibration, and the projector is modeled according to Section 3.1, including internal and external parameters.
Internal parameter calibration
Figure 3: Projector and camera pixel matching based on structured light
The projector is modeled as a pinhole and can be calibrated using a camera calibration method (Zhang, 2000). However, since the projector cannot take photos, we use the projector to project structured light onto a checkerboard at different positions and calibrate the projector through the structured light images captured by the camera (Moreno and Taubin, 2012). As shown in Figure 3, by decoding a photograph with a structured light sequence, the correspondence between points in the camera photo and points in the projector plane is found. As shown in equation (5), a local homography $\widehat{H}$ is calculated for each inner corner of the checkerboard in the photo. Equation (7) is then used to convert each inner corner $v(x,y,0)$ of the checkerboard from $\overline{p}$ in the camera image plane to $\overline{q}$ in the projector image plane.
$\widehat{H}=\underset{H}{\arg\min}\sum _{\forall p}{\Vert q-Hp\Vert }^{2} \quad (5)$

$H\in {\mathbb{R}}^{3\times 3},\; p={\left[u,v,1\right]}^{T},\; q={\left[i,j,1\right]}^{T} \quad (6)$

$\overline{q}=\widehat{H}\cdot \overline{p} \quad (7)$
In this way, the correspondence between the inner corner points of the checkerboard and points in the projector plane is obtained. Using formulas (1) and (2), the internal parameter matrix ${K}_{P}$ of the calibration projector can be obtained, together with the rotation matrix $R$ and translation vector $T$ of the projector relative to the checkerboard coordinate system. We still need to determine the external parameter matrix of the calibration projector relative to the projection screen.
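The least-squares homography of equations (5)-(7) is commonly solved with the direct linear transform (DLT); OpenCV's `findHomography` does the same job. The sketch below is illustrative (function names are ours, and a synthetic homography stands in for real correspondences):

```python
import numpy as np

def fit_homography(P, Q):
    """Least-squares H with q ~ H p (equations 5-7) via the standard DLT:
    two linear constraints per correspondence, solved as the null vector
    (smallest right-singular vector) of the stacked system."""
    A = []
    for (u, v), (i, j) in zip(P, Q):
        A.append([-u, -v, -1, 0, 0, 0, i * u, i * v, i])
        A.append([0, 0, 0, -u, -v, -1, j * u, j * v, j])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 3)

def map_point(H, p):
    """Equation (7): transfer a camera-plane corner to the projector plane."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Synthetic check: recover a known homography from five exact correspondences
H_true = np.array([[1.2, 0.1, 5.0], [0.0, 0.9, -3.0], [1e-3, 0.0, 1.0]])
P = [(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 0.5)]
Q = [tuple(map_point(H_true, p)) for p in P]
H = fit_homography(P, Q)
```

Since a homography is defined only up to scale, the recovered $H$ agrees with the true one after the projective division in `map_point`.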
External parameter calibration
Figure 4: Calibrating the external parameters of the projector
We obtained the internal parameter matrix ${K}_{P}$ of the projector, including the projector's principal point $({u}_{0},{v}_{0})$, ${f}_{u}$ and ${f}_{v}$. Since the width ($w$) and height ($h$) of the projected image are known, we can calculate the vertical field of view by equation (8). As shown in Figure 4, we can determine the view frustum of the projector, defined by its center and five planes (top, bottom, left, right and the image plane).
$\cot\left(\frac{Vfov}{2}\right)=\frac{({f}_{u}+{f}_{v})/2}{h/2} \quad (8)$
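Inverting equation (8) gives Vfov directly. As a numeric sanity check (the function name is ours), plugging in the focal lengths reported later in Table 1 for a 1024×768 projector reproduces the 53.29-degree value in that table:

```python
import math

def vertical_fov(f_u, f_v, h):
    """Vertical field of view from equation (8):
    cot(Vfov/2) = ((f_u + f_v)/2) / (h/2), h = image height in pixels."""
    f = (f_u + f_v) / 2.0
    return math.degrees(2.0 * math.atan((h / 2.0) / f))

# Focal lengths from Table 1, image height 768 pixels
fov = vertical_fov(761.65, 769.15, 768)  # ~53.3 degrees
```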
After obtaining the view frustum of the projector, in VVS, the external parameters of the projector are calculated by using the reprojected image on the screen.
In Figure 4, first, the reprojected grid image ${E}_{P}{F}_{P}{G}_{P}{H}_{P}$ of the projector, taken by the camera in AVS, is mapped to the corresponding position on the virtual curved display wall in VVS. Then, in VVS, we select the 3D curves ${E}_{P}{F}_{P}$ and ${G}_{P}{H}_{P}$ of the reprojected grid image and fit planes to the samples of ${E}_{P}{F}_{P}$ and ${G}_{P}{H}_{P}$ in a linear least-squares sense to estimate the planes $T$ and $B$. The intersection of $T$ and $B$ gives the line ${L}_{0}$, on which the center of the projector lies. Finally, the center of the projector model in VVS is moved along the line ${L}_{0}$ until the image formed by the view frustum of the projector model on the virtual curved display wall coincides with the reprojected image ${E}_{P}{F}_{P}{G}_{P}{H}_{P}$. At this point, the pose of the projector in VVS is consistent with the pose of the projector in AVS, and from this pose the external parameters of the actual projector are obtained.
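The plane-fitting and intersection steps above can be sketched with standard linear algebra. This is an illustrative implementation under our own naming, assuming the curve samples are given as lists of 3D points; the toy check uses two axis-aligned planes whose intersection is the x-axis:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D samples: n . x = c, where n is the
    smallest right-singular vector of the centered point matrix."""
    P = np.asarray(points, float)
    centroid = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - centroid)
    n = Vt[-1]
    return n, n @ centroid

def plane_intersection(n1, c1, n2, c2):
    """Line of intersection of two planes: direction d = n1 x n2, plus the
    minimum-norm point x0 satisfying both plane equations."""
    d = np.cross(n1, n2)
    A = np.vstack([n1, n2])
    x0, *_ = np.linalg.lstsq(A, np.array([c1, c2]), rcond=None)
    return x0, d / np.linalg.norm(d)

# Toy check: the planes z = 0 and y = 0 intersect in the x-axis
n1, c1 = fit_plane([(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)])
n2, c2 = fit_plane([(0, 0, 0), (1, 0, 0), (0, 0, 1), (1, 0, 1)])
x0, d = plane_intersection(n1, c1, n2, c2)
```

In the paper's setting, the two point sets are the samples of $E_P F_P$ and $G_P H_P$ on the virtual wall, and the returned line is $L_0$.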
It can be seen that using the virtual viewing space for projector calibration is not limited by the camera's field of view and can calibrate any projector in the actual space individually, making the method practical and versatile.
Geometric correction steps
To sum up, the geometric correction based on VVS is divided into the following steps:
(1) By measuring the dimensions of the cylinder, the position and shape of the cylinder are modeled in VVS.
(2) The camera is used to obtain the checkerboard image with structured light, and the mapping between the pixels of the actual projector and the corner points of the checkerboard is established. The internal parameters of the calibration projector are determined, and the view frustum of the projector is constructed in VVS.
(3) The reprojected grid image of the calibration projector in AVS is mapped to the corresponding position of the display wall in VVS.
(4) By comparing the reprojected images in AVS and in VVS, the external parameter matrix of the calibration projector is determined when the two match.
(5) Comparing the reprojection image of the projector on the wall and the target image of the virtual camera, the mapping between the two images is established, and the geometric correction of the multichannel system is realized by the mapping.
In addition, rational Bezier patches are used manually for further adjustment to eliminate the residual errors between AVS and VVS (in the screen model and the projection and reprojection models) and achieve more precise geometric correction.
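For reference, evaluating a rational Bezier patch of the kind used for this fine-tuning is straightforward. This sketch is ours, not the paper's implementation; the check exploits the linear precision of Bernstein polynomials (with unit weights and control points on a regular grid, the patch is the identity map, i.e. zero adjustment):

```python
import numpy as np
from math import comb

def rational_bezier_patch(ctrl, weights, u, v):
    """Evaluate a rational Bezier patch at (u, v) in [0,1]^2.

    ctrl: (n+1, m+1, 2) control points; weights: (n+1, m+1) positive weights.
    Moving control points warps the image locally for fine adjustment."""
    n, m = ctrl.shape[0] - 1, ctrl.shape[1] - 1
    Bu = np.array([comb(n, i) * u**i * (1 - u)**(n - i) for i in range(n + 1)])
    Bv = np.array([comb(m, j) * v**j * (1 - v)**(m - j) for j in range(m + 1)])
    W = np.einsum('i,j,ij->', Bu, Bv, weights)          # rational denominator
    P = np.einsum('i,j,ij,ijk->k', Bu, Bv, weights, ctrl)
    return P / W

# Unit weights, 4x4 regular control grid: the patch maps (u, v) to itself
ctrl = np.stack(np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4),
                            indexing='ij'), axis=-1)
pt = rational_bezier_patch(ctrl, np.ones((4, 4)), 0.25, 0.6)
```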
Results and applications
We use a maritime simulator with 5 channels for the experiment. The projection screen is a cylindrical wall with a radius ($R$) of 8.42 meters, a height ($h$) of 3.0 meters, and a horizontal angle ($\theta$) of 180 degrees.
The experiment comprises five SECOAPD700L projectors and a Canon EOS 6D camera. The projectors' resolution is 1024×768, and the camera's resolution is 2736×1824. The internal parameter matrix of each projector is calibrated using Moreno's method (Moreno and Taubin, 2012).
Figure 5: The experimental diagram for calibrating the internal parameters of the projector
(a) a completely illuminated image, (b) a projected Gray code, (c) the pixel-to-projector-column mapping, (d) the pixel-to-projector-row mapping
Single channel calibration
We use structured light to calibrate the internal parameters of each projector separately. The following is an example of calibrating the intermediate-channel projector of this system. As shown in Figure 5(a), the intermediate-channel projector projects a checkerboard illuminated by white light, and the captured image is used for calibration of the camera. In Figure 5(b), the intermediate-channel projector sequentially projects Gray code patterns, and the corresponding checkerboard pictures structured by the Gray code are captured by the camera. It needs $2^{10}=1024$ different codes to meet the projector resolution requirement; according to the literature [19], a total of 40 pictures are required. These pictures are then decoded to obtain the distribution of the pixels of the projected image. Figure 5(c) and (d) show the distributions of the vertical and horizontal pixel coordinates of the projected image respectively, where pixels of the same color have equal coordinates. At the same time, formula (5) is used to convert all the corner points of the checkerboard into projector coordinates according to Zhang's method (Zhang, 2000).
$\left\{\begin{array}{l}{\alpha }_{x}={\tan }^{-1}\left(\frac{{O}_{x}-{I}_{x}}{f}\right)\\ {\alpha }_{y}={\tan }^{-1}\left(\frac{{O}_{y}-{I}_{y}}{f}\right)\end{array}\right. \quad (9)$
Table 1 shows the calibrated internal parameters of the projector: the focal lengths $({f}_{u},{f}_{v})$ and the optical center $({O}_{X},{O}_{Y})$. In this paper, $({f}_{u}+{f}_{v})/2$ is taken as the focal length $f$ of the projector to calculate the parameters of its view frustum; its vertical field of view is calculated according to formula (8), and its optical center shift angles are calculated by equation (9), where $({I}_{X},{I}_{Y})$ is the center of the projected image.
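The conversion of equation (9) can be reproduced from the Table 1 values (the function name is ours; the small residual difference from the tabulated 6.93 degrees is likely due to which focal length the paper used):

```python
import math

def optical_center_shift(O, I, f):
    """Optical-center shift angles from equation (9):
    alpha = atan((O - I) / f) per axis, I = image center, f = focal length."""
    ax = math.degrees(math.atan((O[0] - I[0]) / f))
    ay = math.degrees(math.atan((O[1] - I[1]) / f))
    return ax, ay

# Table 1 values for the 1024x768 intermediate-channel projector
f = (761.65 + 769.15) / 2.0
ax, ay = optical_center_shift((523.63, 477.41), (512, 384), f)
```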
Table 1 Results of calibration of internal parameters (intermediate channel projector)

Calibration result   Focal length (pixel)             (761.65, 769.15)
                     Optical center (pixel)           (523.63, 477.41)
Conversion result    Vertical field of view (degree)  53.29
                     Optical center shift (degree)    (0.87, 6.93)
Figure 6: The reprojected image of the projector in AVS mapped to the virtual display wall in VVS
(a) a cross checkered pattern, (b) reprojected image, (c) mapping process, (d) mapping results
When calibrating the external parameters of the projector, in order to facilitate the matching of the reprojected images in VVS and AVS, the projected image adopts a cross checkered pattern, as shown in Figure 6(a). This pattern also remedies the situation in which the projected image cannot be fully projected onto the cylindrical display wall, which would otherwise cause calibration to fail (Sajadi and Majumder, 2010). We use the camera to obtain a reprojected image of the intermediate-channel projector, as shown in Figure 6(b). Figure 6(c) shows the parameterization of the captured picture, mapping the red frame area of Figure 6(b) to the corresponding position on the virtual screen. As shown in Figure 6(d), this completes the viewing image mapping between the two spaces and provides a reference image for calibrating the external parameters of the actual projector.
After determining the reference image in VVS, as shown in Figure 7(a), the view frustum of the projector is first determined from the internal parameters that have been calibrated. In Figure 7(b), the upper and lower parallel lines provided by the cross checkered pattern on the screen are used to find the line on which the center of the projector is located. At the same time, the projector model simulates the actual projector by projecting a cross checkered pattern onto the virtual screen. As shown in Figure 7(c), if the reprojected image of the projector deviates from the reference image, the position of the projector is manually fine-tuned to match the two. As shown in Figure 7(d), the final match between the reprojected image and the reference image determines the external parameters, completing the whole projector calibration process. Table 2 shows the calibrated external parameters of the intermediate-channel projector in the system.
Figure 7: Calibrating the external parameters of the actual projector
(a) the view frustum, (b) initial position, (c) the two images deviate, (d) the two images matched
Table 2 Results of calibration of external parameters (intermediate channel projector)

Translation vector (m)      (0.0482, 1.0907, 3.6416)
Rotation vector (degree)    (0.0, 0.0, 0.4)
Multi-channel calibration
Figure 8: Five-channel projector calibration
(a) Five-channel reprojected image with cross checkered pattern, (b) Reprojected image mapped to the virtual display wall, (c) The result of five-channel projector calibration, (d) Grid display on the wall before calibration, (e) Grid display on the wall after calibration, (f) Grid display on the wall adjusted by rational Bezier patches
Based on the internal and external parameter calibration, we calibrated the five channel projectors separately. Figure 8(a) is a five-channel viewing image with a cross checkered pattern taken by a camera. Figure 8(b) is the mapping result of the five-channel viewing image from AVS to VVS, corresponding to Figure 8(a). Figure 8(c) shows the reprojected images of the five-channel projectors in agreement with the reference images, thereby realizing the calibration of the internal and external parameters of the five-channel projectors. Figure 8(d) shows the result of projecting the five-channel grid onto the cylindrical display wall without calibration. It can be seen that the grids on the projection wall are obviously not aligned, and the grids are not completely displayed on the screen. Figure 8(e) shows the result of the five-channel projection grids on the screen after calibration. The grids on the projection screen are basically aligned, and the entire projection screen is fully utilized by the full grids. In Figure 8(e), the grids in the overlapping area of channels 1 and 2 are almost perfectly aligned, but those in the overlapping area of channels 2 and 3 are not well aligned. This is caused by the projection distortion of the projector and the errors of the parametric projection screen and the mapped reprojected image; these errors can lead to inaccurate geometric correction. Further manual adjustments can be made using rational Bezier patches for more precise geometric correction. As shown in Figure 8(f), after adjustment by rational Bezier patches, the grids in the overlapping areas are completely aligned.
Figure 9: Five-channel geometric correction
(a) virtual camera reprojection images, (b) the viewing image and target image mapping, (c) overall rendering of the five channels
Figure 9(a) shows the reprojected images of the five virtual cameras on the projection screen, referred to as the target images. Figure 9(b) shows the viewing images of the calibrated projectors and the target images of the virtual cameras, and establishes the mapping between the cameras and projectors. Each viewing image displays only the corresponding content of the target image inside the intersection area and nothing outside it. The effect of forming a complete virtual scene is achieved, and finally the geometric correction of the multi-channel visual simulation system is realized.
After the geometric correction, photometric correction is required to obtain a projection with uniform brightness. This paper focuses on geometric correction and uses the method of Yang et al. (2001) for photometric correction. Figure 9(c) is an overall rendering of the five-channel maritime simulator after geometric and photometric correction.
Applications
A visual simulation system with 5 channels is implemented by our method and applied to a maritime maneuvering simulator. It projects a completely virtual environment image onto the cylindrical wall, which gives the user more realistic immersion. In Figure 10(a), the system carries out ship maneuvering and navigation training through the rudder, the electronic chart, and the navigation radar in the virtual bridge.
In addition, since we perform geometric correction based on VVS, we can establish a 3D mapping between the projector and the display wall and render the virtual scene according to the actual wall. There is a one-to-one correspondence between VVS and AVS. In Figure 10(b), using this correspondence, we can place a bearing repeater in the center of the display wall and directly measure the bearing of a virtual object for navigation training.
Figure 10: Applications for the maritime maneuvering simulator
(a) Ship training in the virtual bridge (b) A bearing repeater for navigation training
Evaluation
We use the VVS method to perform geometric correction on the 5-channel cylindrical wall, and then analyze and discuss the accuracy and time consumption of the geometric correction.
Projector calibration accuracy
The accuracy of projector calibration is the key to geometric correction, directly affecting its accuracy and efficiency. In our system, the height of the projector's optical center is convenient to measure directly. Comparing the actual measured height with the calibrated value reflects the accuracy of our projector calibration method. The results are shown in Table 3; the errors are below 2%.
Table 3 Comparison of measured and calibrated heights

Projector number        1      2      3      4      5
Measured height (m)     1.059  1.068  1.072  1.053  1.046
Calibrated height (m)   1.047  1.057  1.091  1.043  1.029
Error (m)               0.012  0.011  0.019  0.010  0.017
Error ratio             1.1%   1.0%   1.7%   0.9%   1.6%
Geometric correction efficiency
The whole geometric correction process is divided into four main steps: internal parameter calibration of the projectors, construction of VVS, external parameter calibration of the projectors, and fine-tuning with rational Bezier patches. For this 5-channel geometric correction, the approximate time taken by each step is shown in Table 4.
As shown in Table 4, the 5-channel geometric correction takes about 57 minutes, most of which is done automatically by the computer; the manual work accounts for only about 26% of the entire process.
Table 4: Time consumed by the 5-channel geometric correction

| Step | Mode | Time (minutes) | Percent |
|---|---|---|---|
| internal parameter calibration | automatic | 7 × 5 (channels) = 35 | 62% |
| construction of VVS | automatic | 5 | 8% |
| external parameter calibration | automatic + manual | 7 | 12% |
| rational Bezier patch fine-tuning | manual | 10 | 18% |
Discussion
We use the VVS for geometric correction. Compared with working in real space, the display wall and the projector model can be constructed quickly in the VVS, which makes it convenient to establish a 3D mapping between the projector and the screen and improves both the efficiency of geometric correction and the scope of application.
In the mapping, the wall of the AVS is turned into the VVS, which makes the construction and control of the wall more convenient. It avoids marking points on the physical wall and reduces the complexity of handling the wall compared with the literature (Chao et al., 2013; Fang, 2013; Harville et al., 2006; Sun et al., 2008).
In the multi-projector calibration, the internal and external parameters of each projector are calibrated in two separate steps, and each projector is calibrated independently in the VVS. Compared with the literature (Sajadi et al., 2009; Li et al., 2015), the number of projectors that can be calibrated is not limited by the field of view of a single camera, and it is convenient to calibrate any projector in a wide-viewing-angle (up to 360-degree) scene. Compared with the literature (Chen et al., 2002), our method does not accumulate projector calibration errors.
In terms of operation, as shown in Tables 3 and 4, high projector calibration accuracy can be obtained automatically, and the calibrated projection grids are basically aligned on the wall; only a small amount of further local manual adjustment via rational Bezier patches is required for more precise geometric correction. Compared with the literature (Raskar et al., 2002; Raskar et al., 2004; Raskar and van Baar, 2005), the method adapts better to nonlinearity in the projectors and the display wall.
In terms of applications, the VVS method establishes a 3D mapping between the projector and the screen. As shown in Figure 10(a) and (b), this correspondence can be exploited in viewpoint-dependent virtual environments such as virtual driving and target positioning. Compared with the literature (Bhasker et al., 2007; Harville et al., 2006; Sun et al., 2008), it has a wider range of applications.
Summary
We propose a multi-projector correction method based on the VVS and apply it to a 5-channel navigation simulator. The method conveniently establishes a 3D mapping between each calibrated projector and the display wall, which brings several advantages. First, each projector is calibrated independently, so adding projectors introduces no additional correction complexity. Second, the number of projectors is not limited by the field of view of a single camera. Finally, in terms of time consumption, the calibration process is largely automatic, and the total time grows only linearly with the number of projectors calibrated.
In addition, the proposed algorithm is widely applicable: as long as the display wall can be parameterized, it can be used for geometric correction, whether the wall is a plane, a dome, or another parametric surface.
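Parameterizing the wall means providing a map from normalized wall coordinates to 3D points. As a minimal sketch, the function below parameterizes a cylindrical wall segment; the radius, height, and arc angle are illustrative defaults, not the simulator's actual dimensions.

```python
import math

def cylinder_point(u, v, radius=3.0, height=2.5, arc_deg=180.0):
    """Map normalized wall coordinates (u, v) in [0, 1]^2 to a 3D point on a
    cylindrical display wall centered on the z axis. The dimensions are
    hypothetical placeholders, not taken from the paper."""
    theta = math.radians(arc_deg) * (u - 0.5)  # angle about the vertical axis
    x = radius * math.sin(theta)
    y = radius * math.cos(theta)
    z = v * height
    return (x, y, z)

# Center of the wall at mid-height:
print(cylinder_point(0.5, 0.5))  # (0.0, 3.0, 1.25)
```

A dome or a plane would only swap in a different (u, v) → (x, y, z) map; the rest of the correction pipeline is unchanged.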
However, the method also introduces errors when constructing the VVS, including the display parameterization error, the projector model error, and the error in mapping the reprojected images, which can make the geometric correction inaccurate. How to reduce or avoid these errors deserves further study. In addition, after geometric correction, the second major challenge in a multichannel system is photometric correction; achieving a smoother brightness transition in the overlap zones will also be part of our future work.
Acknowledgements
This research is sponsored by the large-scale surface warship integrated maneuvering simulation training system of the Chinese Navy Staff.
References
 Bhasker E., Juang R., Majumder A., 2007. Registration techniques for using imperfect and partially calibrated devices in planar multi-projector displays. IEEE Transactions on Visualization and Computer Graphics, 13(6): 1368–1375.
 Bradski G. and Kaehler A., 2009. Learning OpenCV. Beijing: Tsinghua University Press. (in Chinese)
 Brown M.S. and Seales W.B., 2002. A practical and flexible tiled display system. Proceedings of the 10th Pacific Conference on Computer Graphics and Applications. Los Alamitos: IEEE Computer Society Press.
 Cai Y., Chia N.K.H., Thalmann D., Kee N.K., Zheng J., Thalmann N.M., 2013. Design and development of a virtual dolphinarium for children with autism. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 21(2): 208–217.
 Chao X., Hongyu Y., Haijun L., Yulong J., Xinsheng L., 2013. Geometric calibration for multi-projector display system based on structured light. Journal of Computer-Aided Design & Computer Graphics, 25(6): 802–808. (in Chinese)
 Chen H., Sukthankar R., Wallace G., 2002. Scalable alignment of large-format multi-projector displays using camera homography trees. IEEE Visualization Conference, Boston, United States.
 Draréni J., Roy S., Sturm P., 2012. Methods for geometrical video projector calibration. Machine Vision and Applications, 23: 79–89.
 Fang S., 2013. An interactive warping method for multichannel visual simulation system. Journal of Computer-Aided Design & Computer Graphics, 25(9): 1318–1324. (in Chinese)
 Gao W., Wang L., Hu Z.Y., 2008. Flexible calibration of a portable structured light system through surface plane. Acta Automatica Sinica, 34(11): 1358–1362.
 Harville M., Culbertson B., Sobel I., Gelb D., Fitzhugh A., Tanguay D., 2006. Practical methods for geometric and photometric correction of tiled projector displays. Proceedings of the Conference on Computer Vision and Pattern Recognition Workshop (CVPRW'06), New York, USA.
 Li D., Xie J., Zhao L., Zhou L., Weng D., 2015. Multi-projector auto-calibration for non-planar surface. Computer Engineering and Applications, 51(7): 194–203. (in Chinese)
 Martynov I., Kamarainen J.K., Lensu L., 2011. Projector calibration by inverse camera calibration. Proceedings of the 17th Scandinavian Conference on Image Analysis, Heidelberg, Berlin.
 Moreno D. and Taubin G., 2012. Simple, accurate, and robust projector-camera calibration. Second Joint 3DIM/3DPVT Conference: 3D Imaging, Modeling, Processing, Visualization & Transmission. Los Alamitos, CA: IEEE Computer Society.
 Raskar R., Brown M.S., Yang R., 1999. Multi-projector displays using camera-based registration. IEEE Visualization Conference, Los Alamitos, United States.
 Raskar R., van Baar J., Chai J., 2002. A low-cost projector mosaic with fast registration. Proc. 5th Int. Conf. Computer Vision, Melbourne, Australia.
 Raskar R., van Baar J., Willwacher T., 2004. Quadric transfer for immersive curved display. Computer Graphics Forum, 23(3): 1–10.
 Raskar R. and van Baar J., 2005. Low-cost multi-projector curved screen displays. Digest of Technical Papers - SID International Symposium, 36(1): 884–887.
 Sajadi B. and Majumder A., 2011. Auto-calibrating tiled projectors on piecewise smooth vertically extruded surfaces. IEEE Transactions on Visualization and Computer Graphics, 17(9): 1209–1222.
 Sajadi B. and Majumder A., 2010. Auto-calibration of cylindrical multi-projector systems. Proceedings of the Virtual Reality Conference, Boston, USA.
 Sajadi B. and Majumder A., 2009. Markerless view-independent registration of multiple distorted projectors on extruded surfaces using an uncalibrated camera. IEEE Transactions on Visualization and Computer Graphics, 15(6): 1307–1316.
 Sun W., Sobel I., Culbertson B., Gelb D., Robinson I., 2008. Calibrating multi-projector cylindrically curved displays for wallpaper projection. Proceedings of the 5th ACM/IEEE International Workshop on Projector Camera Systems, Marina del Rey, USA.
 Yang R., Gotz D., Hensley J., Towles H., Brown M.S., 2001. PixelFlex: A reconfigurable multi-projector display system. Proceedings of Visualization '01, IEEE Computer Society.
 Yicheng J. and Yong Y., 2013. Maritime Simulator. Beijing: Science Press. (in Chinese)
 Zhang Z., 2000. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11): 1330–1334.