ISS Speed Analysis Dashboard v2

Advanced Image Processing and Speed Calculation


Section 1: Photo Selection
âš ī¸ Delete uploaded folders to free up storage space
Photo range: Start 0 to End 9
Processing 10 images (0-9)
Section 2: Algorithm Configuration
Configure the computer vision algorithm and processing parameters for ISS speed calculation.
Choose between ORB and SIFT algorithms, enable FLANN matching for faster processing, and configure contrast enhancement.

🔧 Core Algorithm Settings

Choose the feature detection algorithm and matching method

Feature Detection Algorithm
ORB: Fast, binary descriptors, good for real-time applications.
SIFT: More accurate, float descriptors, slower but more robust.
ORB is recommended for speed, SIFT for accuracy.
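A minimal OpenCV sketch of the two detector choices (not the dashboard's exact code; the file name is a placeholder):

```python
import cv2

use_sift = False  # toggled by the Feature Detection Algorithm setting

# Hypothetical input photo, loaded in grayscale for feature detection.
img = cv2.imread("photo_0042.jpg", cv2.IMREAD_GRAYSCALE)

if use_sift:
    detector = cv2.SIFT_create()   # float descriptors: slower, more robust
else:
    detector = cv2.ORB_create()    # binary descriptors: fast

keypoints, descriptors = detector.detectAndCompute(img, None)
print(len(keypoints), "keypoints detected")
```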
FLANN Matching
Fast Library for Approximate Nearest Neighbors - significantly faster matching for large feature sets.
Recommended for SIFT, optional for ORB.
What it does: Enables Fast Library for Approximate Nearest Neighbors (FLANN) matching
Method: Uses optimized algorithms for faster feature matching between images
Logic: Approximates nearest neighbor search instead of exact matching for speed
Example: Reduces matching time from 10s to 2s for large feature sets (1000+ features)
Purpose: Significantly improves processing speed for datasets with many keypoints
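A sketch of FLANN matching between two consecutive photos, assuming ORB descriptors and hypothetical file names; for SIFT, a KD-tree index (algorithm=1, trees=5) would be used instead of LSH:

```python
import cv2

img1 = cv2.imread("photo_0042.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder files
img2 = cv2.imread("photo_0043.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# LSH index for ORB's binary descriptors (FLANN_INDEX_LSH = 6).
index_params = dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)

# k-NN matching plus Lowe's ratio test; LSH can return fewer than 2 neighbours.
pairs = flann.knnMatch(des1, des2, k=2)
good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
print(len(good), "good matches")
```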
Maximum Features
Maximum number of keypoints to detect per image. More features = better matching but slower processing.
Recommended: 500-1000 for ORB, 1000-2000 for SIFT.
1000
What it does: Sets the maximum number of keypoints to detect in each image
Method: Limits the feature detector to find only the strongest N keypoints
Logic: Higher values = more features but slower processing, lower values = faster but fewer features
Example: 1000 features = good balance, 500 = faster processing, 2000 = more detail but slower
Purpose: Balances feature detection quality with processing speed and memory usage
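The cap is passed as the nfeatures argument when the detector is created; the values below are illustrative and follow the recommendations above:

```python
import cv2

orb_fast     = cv2.ORB_create(nfeatures=500)    # faster, fewer features
orb_balanced = cv2.ORB_create(nfeatures=1000)   # the balanced setting shown above
sift_detail  = cv2.SIFT_create(nfeatures=2000)  # more detail, slower
```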

đŸ–ŧī¸ Image Processing Settings

Configure image preprocessing to improve feature detection

Contrast Enhancement
Preprocessing technique to improve feature detection by enhancing image contrast.
CLAHE is recommended for most images, especially those with poor lighting.
What it does: Applies image enhancement techniques to improve feature detection quality
Method: Uses various algorithms (CLAHE, histogram equalization, gamma correction, unsharp masking)
Logic: Enhances image contrast and sharpness to make keypoints more detectable
Example: CLAHE improves cloudy images, gamma correction helps with lighting variations
Purpose: Increases the number and quality of detected keypoints in challenging lighting conditions
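A sketch of two of the enhancement options, assuming OpenCV and a placeholder file name; the CLAHE and gamma parameters are illustrative defaults, not the dashboard's exact settings:

```python
import cv2
import numpy as np

img = cv2.imread("photo_0042.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file

# CLAHE: local histogram equalization on small tiles, which lifts contrast
# without blowing out already-bright cloud regions.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)

# Gamma correction via a lookup table, for global lighting differences.
gamma = 1.5
lut = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255).astype("uint8")
brightened = cv2.LUT(img, lut)
```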

🎯 Quality Control Settings

Filter out false matches and improve result accuracy

RANSAC/Homography Filtering
Uses geometric consistency to filter out false matches from clouds and noise. Only keeps matches that follow a consistent perspective transformation. Combines RANSAC outlier detection with homography model fitting.
Recommended for images with clouds or poor visibility to improve match quality.
What it does: Applies RANSAC (Random Sample Consensus) algorithm to filter out false matches
Method: Uses homography estimation to identify geometrically consistent matches
Logic: Removes matches that don't fit the expected geometric transformation between images
Example: Filters out matches on clouds, shadows, or other features that do not track the ground
Purpose: Improves accuracy by removing false positive matches that would skew speed calculations
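A sketch of the RANSAC/homography step, assuming kp1, kp2 and the good match list from the matching sketch above (at least 4 matches are needed to fit a homography):

```python
import cv2
import numpy as np

src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# Fit a homography with RANSAC; the mask flags geometrically consistent inliers.
H, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, ransacReprojThreshold=5.0)
inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
print(f"{len(inliers)} of {len(good)} matches kept after filtering")
```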
Section 3: Overall Statistics
Speed Statistics
Computer Vision Speed: Calculated from feature matching between image pairs using pixel distances and GSD.
These statistics show the performance of the computer vision algorithm for ISS speed calculation.
Click "Load Data" to see statistics
Section 4: Data Filters

Apply various filters to refine your analysis and focus on the most reliable data points. Use filters to remove outliers or focus on clear images.

📊 Statistical Filters

Remove statistical outliers to improve data quality

What it does: Filters based on speed percentiles of individual matches with separate top and bottom thresholds
Method: Uses np.percentile() to calculate separate speed thresholds for bottom and top percentiles
Logic: Removes matches with speeds in the bottom X% and/or top Y% of all speeds (configurable separately)
Example: Bottom 5% + Top 10% means it removes the slowest 5% and fastest 10% of all match speeds
Purpose: Removes outlier speeds with flexible control over slow vs fast outlier removal
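A sketch of the percentile filter with placeholder speeds (km/s):

```python
import numpy as np

speeds = np.array([6.9, 7.4, 7.5, 7.6, 7.6, 7.7, 7.8, 9.2])

bottom_pct, top_pct = 5, 10
low = np.percentile(speeds, bottom_pct)
high = np.percentile(speeds, 100 - top_pct)
filtered = speeds[(speeds >= low) & (speeds <= high)]  # drops slowest 5% and fastest 10%
```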

What it does: Filters based on number of matches per image pair
Method: Counts matches per pair and removes pairs with too few matches
Logic: Removes entire image pairs that have fewer than X matches
Example: 10 means it removes any image pair that has fewer than 10 keypoint matches
Purpose: Removes poor quality image pairs (where feature detection failed)
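A sketch of the minimum-matches filter with a placeholder mapping of image pairs to match counts:

```python
matches_per_pair = {
    ("photo_0", "photo_1"): 142,
    ("photo_1", "photo_2"): 7,     # this pair is dropped (fewer than 10 matches)
    ("photo_2", "photo_3"): 98,
}

MIN_MATCHES = 10
kept_pairs = {pair: n for pair, n in matches_per_pair.items() if n >= MIN_MATCHES}
```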

What it does: Filters based on statistical outliers using standard deviation
Method: Uses np.std() to calculate standard deviation and removes matches beyond Xσ from the mean
Logic: Removes matches with speeds more than X standard deviations away from the mean speed
Example: 2.0σ means it removes matches with speeds >2 standard deviations from the mean
Purpose: Removes extreme statistical outliers (very unusual speeds that might be errors)
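A sketch of the standard-deviation filter with placeholder speeds (km/s); the 12.3 value falls outside 2σ and is removed:

```python
import numpy as np

speeds = np.array([7.4, 7.5, 7.6, 7.7, 7.8, 12.3])
N_SIGMA = 2.0

mean, std = speeds.mean(), np.std(speeds)
filtered = speeds[np.abs(speeds - mean) <= N_SIGMA * std]
```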

What it does: Filters based on statistical outliers using Median Absolute Deviation
Method: Uses np.median() to calculate MAD and removes matches beyond X*MAD from median
Logic: Removes matches with speeds more than X*MAD away from the median speed
Example: 3.0*MAD means it removes matches with speeds >3*MAD from the median
Purpose: Robust outlier removal that is less sensitive to extreme values than standard deviation
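The same placeholder data through the MAD filter; because MAD is computed from the median, a single extreme value barely shifts the threshold:

```python
import numpy as np

speeds = np.array([7.4, 7.5, 7.6, 7.7, 7.8, 12.3])
N_MAD = 3.0

median = np.median(speeds)
mad = np.median(np.abs(speeds - median))
filtered = speeds[np.abs(speeds - median) <= N_MAD * mad]  # 12.3 is removed
```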

đŸŒ¤ī¸ Image Quality Filters

Filter based on image characteristics and cloudiness

What it does: Filters based on image brightness and contrast to categorize cloudiness
Method: Analyzes image properties (brightness, contrast) to classify image quality
Logic: Clear images have high brightness/contrast, cloudy images have low brightness/contrast
Example: Clear: brightness ≥120 and contrast ≥55; Cloudy: brightness ≤60 or contrast ≤40
Purpose: Removes poor quality images (cloudy/overcast conditions that affect ISS visibility)

Clear Image Thresholds: brightness ≥ 120, contrast ≥ 55
Cloudy Image Thresholds: brightness ≤ 60, contrast ≤ 40
Include Categories
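A sketch of the cloudiness classification using the thresholds above, assuming brightness is the mean grey level and contrast the grey-level standard deviation (the actual metrics may differ), with a hypothetical in-between category for images matching neither rule:

```python
import cv2
import numpy as np

img = cv2.imread("photo_0042.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file

brightness = float(np.mean(img))  # assumed brightness metric
contrast = float(np.std(img))     # assumed contrast metric

if brightness >= 120 and contrast >= 55:
    category = "clear"
elif brightness <= 60 or contrast <= 40:
    category = "cloudy"
else:
    category = "partly_cloudy"    # hypothetical in-between category
```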

đŸ›°ī¸ Ground Sample Distance (GSD) Configuration

Configure the Ground Sample Distance used for speed calculations. GSD represents the distance between pixel centers measured on the ground.

What it does: Override the default Ground Sample Distance (GSD) value
Method: Allows manual input of custom GSD value in cm/pixel
Logic: Replaces default GSD (12648 cm/pixel) with user-specified value for speed calculations
Example: 10000 cm/pixel = 100 m per pixel (lower altitude), 15000 cm/pixel = 150 m per pixel (higher altitude)
Purpose: Adjust speed calculations for different camera setups, altitudes, or ISS positions
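A sketch of how the override changes the speed estimate; the displacement and capture interval are placeholders:

```python
DEFAULT_GSD_CM_PER_PIXEL = 12648
custom_gsd = 10000                # user override, cm/pixel (100 m per pixel)

pixel_distance = 60.6             # placeholder mean displacement per image pair
time_between_photos_s = 1.0       # placeholder capture interval

speed_default = pixel_distance * DEFAULT_GSD_CM_PER_PIXEL / 100000 / time_between_photos_s
speed_custom = pixel_distance * custom_gsd / 100000 / time_between_photos_s
print(f"default GSD: {speed_default:.2f} km/s, custom GSD: {speed_custom:.2f} km/s")
```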

Section 5: Speed Distribution (Histogram)
Section 6: Pair Analysis - Mean & Median (Click dots to view details)
Section 7: Algorithm Comparison
Compare different ISS speed calculation algorithms on the same dataset. Includes GitHub projects and other implementations for performance comparison.
This helps evaluate which approach works best for your specific images and conditions.

🔬 Algorithm Selection

Choose which algorithms to compare against your current results

GitHub Computer Vision (diyasmenon/astropi)
Basic ORB feature detection with simple outlier removal. A straightforward computer vision approach.
Good baseline for comparison with more sophisticated methods.
Method: Basic ORB feature detection
Approach: Simple outlier removal
Complexity: Low
SIFT-based (cchan083/AstroPi)
Uses SIFT feature detection with grayscale processing. Achieved 98.75% accuracy in testing.
High accuracy but slower processing due to SIFT algorithm complexity.
Method: SIFT feature detection
Approach: Grayscale processing + brute force matching
Complexity: High (slower but accurate)
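A generic illustration of the SIFT + grayscale + brute-force ingredients described above; this is not code from cchan083/AstroPi, and the file names are placeholders:

```python
import cv2

img1 = cv2.imread("photo_0042.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_0043.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matcher with L2 norm, appropriate for SIFT's float descriptors.
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
```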

📊 Comparison Results

Click "Run Algorithm Comparison" to see results
This will test the selected algorithms on your current dataset