From a2e42353d28a93bcf5bd1ca8a4ede0e1148c137d Mon Sep 17 00:00:00 2001
From: Obsidian-MBPM4
Date: Wed, 26 Mar 2025 16:31:50 +0100
Subject: [PATCH] vault backup: 2025-03-26 16:31:49

---
 3 Knowledge/Computer Vision - Depth Perception.md      | 2 +-
 .../10 Projects/Requirements/Requirements Gathering.md | 7 ++++++-
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/3 Knowledge/Computer Vision - Depth Perception.md b/3 Knowledge/Computer Vision - Depth Perception.md
index 71c16d6..0f616c7 100644
--- a/3 Knowledge/Computer Vision - Depth Perception.md
+++ b/3 Knowledge/Computer Vision - Depth Perception.md
@@ -8,7 +8,7 @@ tags:
 # Computer Vision - Depth Perception
 ## Stereo Vision
 ### Disparity Map
-[This blog article](https://www.baeldung.com/cs/disparity-map-stereo-vision) explains nicely what disparity maps are.
+[This blog article](https://www.baeldung.com/cs/disparity-map-stereo-vision) explains nicely what disparity maps are, and [this blog article](https://johnwlambert.github.io/stereo/) goes deeper into the math and physics behind them.
 - The disparity is the apparent motion of objects between a pair of stereo images[^1].
 - The depth is inversely proportional to the disparity. If we know the arrangement of the cameras, then the disparity map can be converted into a depth map using triangulation.
 - When disparity is near zero (far away) then small differences produce large depth differences. When disparity is large, small disparity differences do not change the depth significantly. Hence, stereo vision systems have high depth resolution only for objects relatively near the camera.
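As an aside to the note above (not part of the patch), the inverse depth/disparity relation can be sketched for a rectified stereo pair as Z = f·B/d. The focal length and baseline values below are made up for illustration.

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d
# (f: focal length in pixels, B: baseline in meters, d: disparity in pixels).

def disparity_to_depth(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Triangulate depth (meters) from disparity (pixels)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (zero means infinitely far)")
    return focal_px * baseline_m / disparity_px

f, B = 700.0, 0.12  # hypothetical: 700 px focal length, 12 cm baseline
near = disparity_to_depth(40.0, f, B)  # large disparity -> near object (2.1 m)
far = disparity_to_depth(2.0, f, B)    # small disparity -> far object (42 m)

# The same 1 px disparity error shifts depth far more at small disparities,
# which is why stereo depth resolution degrades with distance:
near_err = disparity_to_depth(39.0, f, B) - near  # ~0.054 m
far_err = disparity_to_depth(1.0, f, B) - far     # 42 m
print(near, far, near_err, far_err)
```

This makes the last bullet concrete: near the camera a pixel of disparity noise costs centimeters of depth; far away it costs tens of meters.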
diff --git a/99 Work/0 OneSec/OneSecNotes/10 Projects/Requirements/Requirements Gathering.md b/99 Work/0 OneSec/OneSecNotes/10 Projects/Requirements/Requirements Gathering.md
index d93b1c2..e65e17b 100644
--- a/99 Work/0 OneSec/OneSecNotes/10 Projects/Requirements/Requirements Gathering.md
+++ b/99 Work/0 OneSec/OneSecNotes/10 Projects/Requirements/Requirements Gathering.md
@@ -9,4 +9,9 @@ tags:
 - Ensure that cameras are well calibrated
 	- for stereo vision: vibration, relative motion of the cameras, when the wings flap, etc. See [here](https://youtu.be/GpU1Vx-b3VA?t=187).
 	- for the AI model
-	- #osd/question do we account for camera calibration in the dataset generation?
\ No newline at end of file
+	- #osd/question do we account for camera calibration in the dataset generation?
+- parameter system: the drone must be fully configurable through a parameter system.
+- For smooth delivery, we must be able to pull the payload bay away from under the payload in the last moments of the ejection procedure. Therefore, the payload axis motor/servo must be able to accelerate downwards with at least 1 g (9.81 m/s²) at the outer edge of the payload bay. The maximum speed must also allow pulling the payload edge down by ~4 cm within ~50 ms from a standing start; this max speed figure is probably more helpful to focus on than the acceleration.
+- the payload system must withstand strong external torque, e.g. from a human operator forcefully trying to open or close the payload bay by hand
+- the payload axis actuator (motor, position sensor, etc.) has to be waterproof or water-resistant
+- the payload axis actuator must have enough torque to close the payload bay under the most extreme payload mass condition: roughly 1.5 kg of payload 10 cm off axis, i.e. 15 kg·cm of **dynamic** torque on the axis
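A quick sanity check of the payload-axis numbers above (not part of the patch), assuming constant acceleration from a standing start:

```python
# Constant-acceleration check of the ejection requirements:
# 4 cm of payload-edge travel in 50 ms, from rest.
G = 9.81  # m/s^2

def required_accel(distance_m: float, time_s: float) -> float:
    """Acceleration needed to cover distance_m from rest in time_s:
    s = 0.5 * a * t^2  ->  a = 2 * s / t^2."""
    return 2.0 * distance_m / time_s ** 2

# Distance covered in 50 ms at exactly 1 g:
d_1g = 0.5 * G * 0.05 ** 2            # ~0.0123 m, i.e. only ~1.2 cm

# Acceleration actually needed to reach 4 cm in 50 ms from rest:
a_req = required_accel(0.04, 0.05)    # 32 m/s^2, ~3.3 g
v_end = a_req * 0.05                  # 1.6 m/s peak edge speed

# Worst-case static torque on the axis: 1.5 kg at 10 cm off axis.
torque_kgcm = 1.5 * 10.0              # 15 kg*cm (dynamic loads add on top)
print(d_1g, a_req, v_end, torque_kgcm)
```

The arithmetic supports the note in the requirement: 1 g alone covers only ~1.2 cm in 50 ms, so the speed/travel-time figure, not the 1 g floor, is the binding spec for the actuator.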