PositionEstimation¶
- class sonic.PositionEstimation¶
This software is made available under the MIT license. See SONIC/LICENSE for details.
- Method Summary
- static CRACov(limbPts, rc_KM, radii_P, cam, att_P2C, cov_x)¶
Calculates the covariance of the maximum likelihood estimate of the camera position from horizon-based position estimation. A usage sketch follows this entry.
- Inputs:
- limbPts (1x1 sonic.Points2): Image plane coordinates of
the limb
- rc_KM (1x1 sonic.Points3): The estimated position of the
celestial body in the camera frame in kilometers
- radii_P (3x1 double): Celestial body’s principal axis radii as described in the planet’s principal axis frame
- cam (1x1 sonic.Camera): Camera object
- att_P2C (1x1 sonic.Attitude): Attitude from the planetary
axis frame to the camera frame
- cov_x (1x1 double): variance in horizon pixel localization
- Outputs:
- P (3x3 double): Position estimation covariance associated with rc_KM
Last revised: 1/8/25 Last author: Ava Thrasher
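Below is a minimal, hedged usage sketch of CRACov. The constructor calls for sonic.Points2, sonic.Points3, and sonic.Attitude are assumptions made for illustration (see those class pages for the actual signatures), and all numeric values are placeholders.

```matlab
% Hedged sketch: constructor calls below are assumed, not confirmed by this page.
limbPts = sonic.Points2(limb_xy);            % limb_xy: 2xn image-plane limb coordinates from edge detection
rc_KM   = sonic.Points3([100; -200; 35000]); % estimated body position in the camera frame [km]
radii_P = [6378.137; 6378.137; 6356.752];    % principal axis radii [km] (Earth-like placeholder)
att_P2C = sonic.Attitude(eye(3));            % principal-axis frame to camera frame (identity placeholder)
cov_x   = 0.25^2;                            % variance of horizon pixel localization
% cam is a previously constructed 1x1 sonic.Camera object (construction not shown here).
P = sonic.PositionEstimation.CRACov(limbPts, rc_KM, radii_P, cam, att_P2C, cov_x);
```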
- static parametricCovCRA(rEst_KM, Rp_KM, camObj, esun, n, cov_x, thetaMax_RAD)¶
Computes the parametric covariance for a position estimate from horizon-based position estimation. ONLY VALID FOR SPHERICAL OBSERVED BODY ASSUMPTION. A usage sketch follows this entry.
- Inputs:
- rEst_KM (1x1 sonic.Points3): vector from spacecraft to
target in the camera frame
- Rp_KM (1x1 double): radius of spherical planet
- camObj (1x1 sonic.Camera): camera object
- esun (1x1 sonic.Points3): vector from target to
sun expressed in camera frame
- n (1x1 double): number of observed horizon points
- cov_x (1x1 double): variance of geometric distance between observed edge points and the best-fit ellipse
- thetaMax_RAD (1x1 double): limb observation half angle
- Outputs:
- P (3x3 double): covariance matrix expressed in the camera
frame
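A minimal sketch of parametricCovCRA is shown below under the spherical-body assumption. The sonic.Points3 constructor calls and all numeric values are illustrative placeholders; camObj is a previously constructed sonic.Camera.

```matlab
% Hedged sketch (spherical-body assumption); constructors and values are illustrative.
rEst_KM      = sonic.Points3([0; 0; 10000]); % spacecraft-to-target vector, camera frame [km]
Rp_KM        = 1737.4;                       % spherical planet radius [km] (Moon-like placeholder)
esun         = sonic.Points3([1; 0; 0]);     % target-to-Sun direction, camera frame
n            = 200;                          % number of observed horizon points
cov_x        = 0.25^2;                       % variance of edge-point-to-ellipse distance
thetaMax_RAD = deg2rad(60);                  % limb observation half angle
% camObj is a previously constructed 1x1 sonic.Camera object (construction not shown here).
P = sonic.PositionEstimation.parametricCovCRA(rEst_KM, Rp_KM, camObj, esun, ...
    n, cov_x, thetaMax_RAD);
```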
- static triangulate(x_C, p_I, attitudes, method, cov_x, computeCov)¶
Triangulate using the specified method. A usage sketch follows this entry.
- Inputs:
- x_C (sonic.Points2): image-plane coordinates (equivalent of K^-1 * pixel coordinates)
- p_I (sonic.Points3): respective 3D points used to triangulate (resection) OR positions of the respective cameras (intersection)
- attitudes
(nx1 sonic.Attitude): attitudes of the cameras
OR
(1x1 sonic.Attitude): a single attitude shared by all cameras
- method (string): “midpoint”, “DLT”, or “LOST”
- cov_x (OPTIONAL)
DEFAULT: unit variance
(1x1 double): isotropic variance, same for all measurements
(1xn double): isotropic variance for each of the n measurements
(2x2xn double): NOT YET IMPLEMENTED. Isotropic variance MUST be assumed.
- computeCov (1x1 logical) (OPTIONAL): flag to decide whether to compute the covariance. This will make the triangulation function slightly slower, especially for DLT and midpoint. Defaults to false.
- Outputs:
- rI (sonic.Points3): triangulated 3D point in the inertial frame
- cov_r (3x3 double): 3D covariance of the triangulated point
- References:
[1] S. Henry and J. A. Christian. Absolute Triangulation
Algorithms for Space Exploration. JGCD (2023). https://doi.org/10.2514/1.G006989
Last revised: 4/26/24 Last author: Sebastien Henry
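A hedged usage sketch of triangulate (intersection case) follows. The sonic.Points2, sonic.Points3, and sonic.Attitude constructor calls are assumptions for illustration only, and the input data are placeholders.

```matlab
% Hedged sketch: constructor calls and data below are illustrative placeholders.
x_C       = sonic.Points2(xy_meas);   % xy_meas: 2xn image-plane measurements (K^-1 * pixel coords)
p_I       = sonic.Points3(camPos_I);  % camPos_I: 3xn camera positions in frame I (intersection case)
attitudes = sonic.Attitude(eye(3));   % single attitude shared by all cameras (identity placeholder)
cov_x     = 1e-6;                     % isotropic variance, same for all measurements
[rI, cov_r] = sonic.PositionEstimation.triangulate( ...
    x_C, p_I, attitudes, "LOST", cov_x, true);
```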
- static withConics(conics, matchedConics_E, attItoC, conicPose, method)¶
Performs least-squares positioning using image-plane conics matched with reference conics defined in their local frames, E (see Section 9.1 of the “Lunar Crater ID…” paper). This method assumes that conic association has already been performed, so that each image-plane conic from the image is associated with a reference conic defined in its local frame (matchedConics_E). A usage sketch follows this entry.
- Inputs:
- conics (Nx1 sonic.Conic): array of image-plane conics
- attItoC (1x1 sonic.Attitude): attitude from the reference frame of interest I, wrt which we want to position ourselves, to the camera frame C. The reference frame can be any frame (inertial, body-fixed).
- matchedConics_E (Nx1 sonic.Conic): array of matched conics in
their local frames, E, (e.g., could be a local ENU frame)
- attConicsEtoI_arr (Nx1 sonic.Attitude): array of attitude
objects which represent the attitude transformation of each conic from their respective local frames to the reference frame of interest I
- conicCenters_I (1x1 sonic.Points3): center points of each
conic in the reference frame of interest I
- method (1xn string): String indicating method of position estimation using conics. Supported methods below:
- “leastsquares”
- Outputs:
- posI (1x1 sonic.Points3): solved least-squares position solution for conic positioning
Last revised: Nov 21, 2024 Last author: Tara Mina
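Below is a hedged sketch of a withConics call. It assumes conic association has already been performed; the conic and attitude objects are taken as already constructed, and conicPose is left opaque since its construction is not documented on this page.

```matlab
% Hedged sketch: all inputs are assumed to be constructed elsewhere.
% conics          : Nx1 sonic.Conic, observed image-plane conics
% matchedConics_E : Nx1 sonic.Conic, matched reference conics in their local frames (e.g., ENU)
% attItoC         : 1x1 sonic.Attitude, reference frame I to camera frame C
% conicPose       : pose information for the reference conics (see the input list above)
posI = sonic.PositionEstimation.withConics(conics, matchedConics_E, attItoC, ...
    conicPose, "leastsquares");
```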
- static withHorizonPts(limbPts, radii_P, att_P2C, cam, cov_x, computeCov)¶
Performs horizon-based optical navigation using the Christian-Robinson algorithm as detailed in “A Tutorial on Horizon-Based Optical Navigation and Attitude Determination With Space Imaging Systems”. A usage sketch follows this entry.
Reference DOI: 10.1109/ACCESS.2021.3051914
- Inputs:
- limbPts (1x1 sonic.Points2): Image plane coordinates of the limb
- radii_P (3x1 double): Celestial body’s principal axis radii as described in the planet’s principal axis frame
- att_P2C (1x1 sonic.Attitude): Attitude transformation from the celestial body’s principal axis frame to the camera frame
- cam (1x1 sonic.Camera): Camera object
- cov_x (OPTIONAL)
DEFAULT: unit variance
(1x1 double): isotropic variance, same for all measurements
- computeCov (1x1 logical) (OPTIONAL): flag to decide whether to compute the covariance. Computing the covariance makes the function slightly slower. Defaults to false.
- Outputs:
- r_C (1x1 sonic.Points3): vector from the camera to the
center of the celestial body expressed in the camera frame
- cov_r (3x3 double): 3D covariance of the estimated
celestial body position
Last revised: 3/27/24 Last author: Ava Thrasher
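Finally, a hedged usage sketch of withHorizonPts. Constructor calls and numeric values are placeholders for illustration; cam is a previously constructed sonic.Camera.

```matlab
% Hedged sketch: constructor calls and values below are illustrative placeholders.
limbPts = sonic.Points2(limb_xy);          % limb_xy: 2xn image-plane limb coordinates from edge detection
radii_P = [6378.137; 6378.137; 6356.752];  % principal axis radii [km] (Earth-like placeholder)
att_P2C = sonic.Attitude(eye(3));          % principal-axis frame to camera frame (identity placeholder)
cov_x   = 0.25^2;                          % variance of horizon pixel localization
% cam is a previously constructed 1x1 sonic.Camera object (construction not shown here).
[r_C, cov_r] = sonic.PositionEstimation.withHorizonPts( ...
    limbPts, radii_P, att_P2C, cam, cov_x, true);
```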