vault backup: 2025-02-03 07:04:13
---
aliases:
  - CMakeLists.txt
---

CMake is a great tool for building C++ projects across different platforms: it manages all dependencies as well as how the project is installed on each platform. A great guide to modern CMake can be found [here](https://cliutils.gitlab.io/modern-cmake/), and a good example project [here](https://gitlab.com/CLIUtils/modern-cmake/-/tree/master/examples/extended-project).

It also integrates well with the [[GTest Framework|GoogleTest]] framework, which lets you define the tests in a separate folder. They are built together with the project and executed using `ctest`.
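As a minimal sketch of that integration (target and file names are placeholders), a `tests/CMakeLists.txt` can fetch GoogleTest and register the tests with `ctest`:

```cmake
# tests/CMakeLists.txt -- minimal sketch, target and file names are placeholders
include(FetchContent)
FetchContent_Declare(
  googletest
  URL https://github.com/google/googletest/archive/refs/tags/v1.14.0.zip
)
FetchContent_MakeAvailable(googletest)

enable_testing()
add_executable(unit_tests test_some.cpp)
target_link_libraries(unit_tests PRIVATE GTest::gtest_main)

include(GoogleTest)
gtest_discover_tests(unit_tests) # registers each test case with ctest
```

After building, running `ctest` from the build directory executes the discovered tests.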
# CMake

## Nomenclature

| Definition | Meaning |
| ---------- | ------- |
| Target     | Executables, libraries or custom commands that can be built and installed. |
| Library    | A collection of code that is compiled into `lib<name>.a` or `<name>.lib` and can be used in other projects by linking against it. |
| Executable | A binary file that runs a certain program. On Windows it is usually called `<name>.exe`. |
## How does it work?

At a high level, we want to understand what CMake does.

## Default Setup for Simple Projects

In most projects so far, we define the project, look for the needed dependencies, define our executable or library, link the dependencies and finally install everything to the system.

For such a simple project the file tree looks like this:
```sh
├── CMakeLists.txt
├── include
│   └── some.h
└── src
    ├── some.cpp
    └── things.h
```

For this, a typical CMakeLists.txt file would look something like the following:

- [ ] Add all the important commands here for cmake to have a small snippet to use in the future #todo/b
```cmake
cmake_minimum_required(VERSION 3.15)
project("SomeLibrary" VERSION 0.1.0)

add_library(${PROJECT_NAME} src/some.cpp)
target_include_directories(${PROJECT_NAME} PUBLIC include)
```
# CMake Targets

## Library

A library is code that can be imported and used by other code. Usually the header files that define the public classes are shared in raw form, whereas the implementation is compiled into a `.so` file or similar (`.so` stands for shared object). Typically these files are installed into `install_path/include/library_name/...` and `install_path/lib/library_name.so`, respectively. For system installs (usually invoked with `sudo make install`) the `install_path` is `/usr/local/` (if not changed manually in the CMake configuration).

A good tutorial and overview of what is needed in CMake can be found [here](https://iamsorush.com/posts/cpp-cmake-config/) or in [this very good description](https://cliutils.gitlab.io/modern-cmake/chapters/install/installing.html). As a general overview we need to do the following things:

1. Add the library target and all its dependencies.
2. Create the install rule for the target, specifying where it will be installed (`DESTINATION` keyword).
3. Export the target to a `library_nameTargets.cmake` file (this defines all the CMake-related glue we do not have to care about).
4. In order for the installed library to be found by CMake we need three files:
	1. `library_nameConfig.cmake`: we write it ourselves and it imports the next file.
	2. `library_nameTargets.cmake`: written automatically by step 3.
	3. `library_nameConfigVersion.cmake`: contains information about the version of the library.

To do this with CMake we need to do the following in the main `CMakeLists.txt` file:
```cmake
# 1. Add library target and dependencies
add_library(library_name SHARED) # call to create a library target
target_include_directories(library_name PRIVATE "${PROJECT_SOURCE_DIR}") # tell the target where to find important include files
add_subdirectory("subdirectory_name") # add subdirectories if needed - with their own CMakeLists.txt files

# 2. Make install target
# Finally we need to install the library.
# Because of EXPORT this defines the library_nameTargets export set; nothing is actually installed yet.
install(TARGETS library_name
    EXPORT library_nameTargets # this export set is written out later, in step 3
    FILE_SET HEADERS
    LIBRARY DESTINATION lib
    ARCHIVE DESTINATION lib
    RUNTIME DESTINATION bin
    INCLUDES DESTINATION include)

# 3. Export the install target
# Now we define where the Targets file will be installed, as well as a CMake namespace.
install(EXPORT library_nameTargets
    FILE library_nameTargets.cmake
    NAMESPACE libName::
    DESTINATION lib/cmake/library_name)
# If your project has no dependencies you can replace library_nameTargets.cmake with
# library_nameConfig.cmake and skip the last step (no. 5), because the needed file
# has already been written here.

# 4. Write the actual .cmake files
include(CMakePackageConfigHelpers) # load helper to create config files

# creates library_nameConfigVersion.cmake, which is needed when another project
# tries to find this package with find_package()
write_basic_package_version_file(
    "library_nameConfigVersion.cmake"
    VERSION ${library_name_VERSION}
    COMPATIBILITY AnyNewerVersion)

# 5. Install the library_nameConfig.cmake file.
# Finally library_nameConfig.cmake is copied into the install tree; it is needed
# to find the library with find_package() in other projects.
install(FILES "library_nameConfig.cmake" "${CMAKE_CURRENT_BINARY_DIR}/library_nameConfigVersion.cmake"
    DESTINATION lib/cmake/library_name)
```
The file `library_nameConfig.cmake` contains the following:

```cmake
include(CMakeFindDependencyMacro)
# find_dependency(xxx 2.0) # if any dependencies are needed

# this includes the Targets.cmake file that is created in step 3 of the CMake file above
include(${CMAKE_CURRENT_LIST_DIR}/library_nameTargets.cmake)
```
## Executable

# Uninstall Target

The default sequence to install a CMake project is the following:

```bash
mkdir build && cd build
cmake ..
cmake --build .
sudo make install
```

The last command executes the installation, which basically copies the important files (as specified in the CMakeLists.txt) into a system directory (usually `/usr/local/`). When this happens, a file called `install_manifest.txt` is created in the build folder which lists all installed files. In order to undo the installation you can run the [following command:](https://stackoverflow.com/a/44649542/7705525)

```bash
xargs rm < install_manifest.txt
```
If you want to get fancier you can also create an [uninstall target](https://gitlab.kitware.com/cmake/community/-/wikis/FAQ#can-i-do-make-uninstall-with-cmake), which basically iterates through the `install_manifest.txt` file and removes every file and folder (if empty) listed in it.
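A minimal sketch of such a target (simpler than the FAQ's helper-script version; note it does not remove directories that become empty):

```cmake
# minimal uninstall target: removes every file listed in install_manifest.txt
add_custom_target(uninstall
    COMMAND xargs rm -v < "${CMAKE_BINARY_DIR}/install_manifest.txt"
    COMMENT "Uninstalling files listed in install_manifest.txt")
```

After that, `sudo cmake --build . --target uninstall` (or `sudo make uninstall`) undoes the install.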
# How is CMake used in ROS2 and Colcon

ROS2 makes extensive use of `CMake` in its build system. It is all hidden behind the `colcon` command, but the projects all contain `CMakeLists.txt` files that define how a ROS2 package is compiled. A good example for looking at source code is the [[ROS2 - NAV2 Library|NAV2]] library.

- [ ] #todo/b add cmake specifics and how ament and colcon use it. Also how testing is done.
- [ ] [Ament_cmake user documentation](https://docs.ros.org/en/rolling/How-To-Guides/Ament-CMake-Documentation.html) #todo/b
# Flashcards

#learning/cpp

how to run unit tests (gtest) of a cmake project ;; `ctest` after having built the project
<!--SR:!2023-12-10,14,290-->

how to define an executable in a cmake project ;; `add_executable(name sourcefile1 sourcefile2 ...)`
<!--SR:!2024-03-15,59,310-->

how to define a library in a cmake project ;; `add_library(name sourcefile1 sourcefile2 ...)`
<!--SR:!2023-12-09,13,290-->

# Resources

- https://decovar.dev/blog/2021/03/08/cmake-cpp-library/
- https://raymii.org/s/tutorials/Cpp_project_setup_with_cmake_and_unit_tests.html
- https://iamsorush.com/posts/cpp-cmake-config/
---

A good article: https://cplusplus.com/doc/tutorial/files/

# Key Takeaways

* There are two modes: binary and text. Text mode formats the bytes being written as text (e.g. ASCII) and parses them back when reading, whereas in binary mode this does not happen.
* There are three stream classes in `<fstream>`: `fstream`, `ofstream`, `ifstream` (f: file, o: output, i: input).
* There are position pointers that keep track of where in the file we're writing to or reading from: they are called the **put** and **get** position pointers, respectively.
* In order to **read the pointers** use `tellg()` and `tellp()` for get and put, respectively.
* In order to **change the pointers** use `seekg(position)` and `seekp(position)` for get and put, respectively. The arguments can also be `seekg(offset, direction)`.
* There are helper flags for the direction: `ios::beg` for the beginning of the file, `ios::cur` for the current location in the file, `ios::end` for the end of the file.
Example:

```c++
// obtaining file size
#include <iostream>
#include <fstream>
using namespace std;

int main () {
    streampos begin, end;
    ifstream myfile("example.bin", ios::binary);
    begin = myfile.tellg();
    myfile.seekg(0, ios::end);
    end = myfile.tellg();
    myfile.close();
    cout << "size is: " << (end - begin) << " bytes.\n";
    return 0;
}
```
---

# Dotfiles

# Functions

## Arguments

Some more info can be found [here](https://unix.stackexchange.com/a/378023/460036).

- `$X`, where X is any number, is the Xth argument
- `$@` is the list of all arguments
- `${@:2}` is the list of all arguments, starting from the second one
	- the `:2` basically means an offset
# Cheat Sheet

## Watch

`watch` can be used to repeatedly execute commands, e.g. for retrieving the CPU temperature:

```bash
❯ sensors
k10temp-pci-00c3
Adapter: PCI adapter
Tctl:         +77.9°C
Tccd1:        +77.5°C
```

With the watch command this output is updated at a given interval (e.g. every second, also highlighting differences with `-d`): `watch -d -n 1 'sensors'`
---

# Main Commands

# Flashcards

#learning/computer_science

update local branch list with remote branch list (also deleted ones) ;; `git fetch -p` (p: prune)
<!--SR:!2024-03-04,48,290-->

reorder the last n commits ;; `git rebase -i HEAD~n` then change the pick order.
<!--SR:!2024-03-24,68,310-->

word to merge multiple commits into one ;; `squash`
<!--SR:!2024-03-23,67,310-->
---

# Alternative Protocols

In the drone community the standard PWM protocol has become too slow, mostly because of its limited update rate, and has therefore been adapted.

## Overview

A good overview page can also be found [here](https://oscarliang.com/esc-firmware-protocols/).

|ESC Protocol|Signal Width|Lowest Update Rate|
|---|---|---|
|**PWM**|1000-2000us|0.5KHz|
|**OneShot125**|125-250us|4KHz|
|**OneShot42**|42-84us|11.9KHz|
|**MultiShot**|5-25us|40KHz|
|**DShot150**|106.7us|9.4KHz|
|**DShot300**|53.3us|18.8KHz|
|**DShot600**|26.7us|37.5KHz|
|**DShot1200**|13.3us|75.2KHz|
|**DShot2400**|6.7us|149.3KHz|

## Oneshot

Oneshot42 vs Oneshot125

## Multishot

## Proshot

## Dshot

Dshot150 vs Dshot300 vs Dshot600

Bidirectional --> feedback
---
title: Computer Vision
created_date: 2024-10-22
updated_date: 2024-10-22
aliases:
tags:
---

# Computer Vision
---

- [ ] 3d reconstruction
- [ ] camera calibration
- [ ] photogrammetry
- [ ] image segmentation
- [ ] facial recognition and eigenfaces
- [ ] image stitching
- [ ] feature recognition
- [ ] connection to [[LLM]]s and [[Multi Modal Models]]
- [ ] [[Convolutional Neural Networks]]
- [ ] [[Deep Learning]]
- [ ] [[Signal Processing]]
- [ ] Vision transformer (VT)
- [ ] Tactile feedback sensors through CV
- [ ] Structured-light 3D scanners
- [ ] thermal cameras
- [ ] radar imaging
- [ ] lidar scanners
- [ ] MRI
- [ ] Sonar

---
## Introduction

- Computer Vision acquires, processes, analyzes and understands digital images
- CV works with high-dimensional data and extracts useful information from it: it transforms visual information into descriptions of the world that make sense and can lead to appropriate decision making and action.
- Many subdomains are known:
	- Object detection and recognition
	- Event detection
	- 3D pose estimation
	- motion estimation
	- image restoration
- Definition:

> Computer vision is a field of AI that enables computers to interpret, understand and analyze visual data from images or videos, simulating human vision. It involves tasks like object detection, image classification, and facial recognition, with applications in areas like autonomous vehicles and medical imaging.

### Distinctions

- [[Image Processing]] focuses on 2D images and how to transform an image into another image. The input and output of image processing are both images. Thus, image processing does not interpret, nor does it require assumptions about, the image content.
- [[Machine Vision]] focuses on image-based automation of inspection, process control and robot guidance in industrial applications. Image sensor technologies and [[control theory]] are often closely intertwined with machine vision. Often there is interaction with the world, e.g. the lighting can be altered, etc.
- [[Imaging]] focuses primarily on producing images and sometimes also on interpreting them. E.g. [[medical imaging]] focuses on producing medical images and detecting diseases through them.

### Foundational Techniques

- Edge detection
- line labelling
- non-polyhedral and polyhedral modelling
- optical flow
- motion estimation
- [[Divide and Conquer]] strategies: run CV algorithms on interesting sub-ROIs instead of the entire image.
### Applications and Tasks

- Automated inspection
- Identification tasks: e.g. species identification
- Controlling processes: e.g. robots
- Detecting events: surveillance, counting, etc.
- Monitoring: health, disease, state of an object, color graduation, etc.
- modeling objects
- navigation
- organisation of information: indexing existing photos
- tracking of objects, surfaces, edges
- tactile feedback sensor: put a silicone dome with known elastic properties over a camera. On the inside are markers. When the silicone dome touches something the markers move, and thus a model can calculate forces and the interaction with the object.
---

## Recognition

- Object recognition: predefined objects can be identified but not differentiated
- Identification: specific objects are detected and individually tracked: two different people can be differentiated.
- Detection: object detection together with location: [[Obstacle Detection]] for robots.

[[Convolutional Neural Networks |CNN]]s are currently the state-of-the-art algorithms for object detection in images. They are nearly as good as humans (only very thin objects don't work well), and even better than humans in subcategories (such as breeds of dogs or species of birds).

### Specialized Tasks based on recognition

- Content-based image retrieval: give me all images with multiple dogs in them
- Pose estimation: estimate the pose of an object relative to the camera: e.g. robot arm, human pose, obstacle, etc.
- [[Optical Character Recognition]]: identify characters in images. It is used by many phones and even Obsidian nowadays. QR codes represent a similar task.
- [[Facial Recognition]]: matching of faces
- Emotion recognition
- Shape Recognition Technology (SRT)
- (Human) Activity Recognition

## Motion Analysis

Using image sequences to produce an estimate of the velocity of an object allows us to track objects (or the camera itself).

- Egomotion: tracking the rigid 3D motion of the camera
- Tracking: follow the movements of objects across the frames (humans, cars, obstacles)
- [[Optical Flow]]: determine how each point is moving relative to the image plane: this combines the movement of the observed point as well as the camera movement. Can be used for state estimation of a [[Drone]], for example.
## Others

- Scene reconstruction: the goal is to compute a 3D model of a scene from images.
- Image restoration:

---
## Courses

### Udacity

The course about [computer vision](https://www.udacity.com/course/computer-vision-nanodegree--nd891). 2-week free trial.

1. Image Representation and Classification: numeric representation of images, color masking, binary classification
2. Convolutional Filters and Edge Detection: frequency in images, image filters for detecting edges and shapes in images, use OpenCV for face detection
3. Types of Features & Image Segmentation: corner detector, k-means clustering for segmenting an image into unique parts
4. Feature Vectors: describe objects and images using feature vectors
5. CNN Layers and Feature Visualization: define and train your own CNN for clothing recognition, use feature visualization techniques to see what a network has learned
6. Project: Facial Keypoint Detection: create a CNN for facial keypoint (eyes, mouth, nose, etc.) detection
7. Cloud Computing with AWS: train networks on Amazon's GPUs
8. Advanced CNN Architectures: region-based CNNs, Faster R-CNN --> fast localized object recognition in images
9. YOLO: multi-object detection model
10. RNNs: incorporate memory into a deep learning model using recurrent neural networks. How do they learn from and generate ordered sequences of data?
11. Long Short-Term Memory Networks (LSTMs): dive into the architecture and the benefits of preserving long-term memory
12. Hyperparameters: what hyperparameters are used in deep learning?
13. Attention Mechanisms: attention models: how do they work?
14. Image Captioning: combine CNN and RNN to build an automatic image captioning model
15. Project: Image Captioning Model: predict captions for a given image: implement an effective RNN decoder for a CNN encoder
16. Motion: mathematical representation of motion, introduction of optical flow
17. Robot Localization: Bayesian filter, uncertainty in robot motion
18. Mini-Project: 2D Histogram Filter: sense and move functions of a 2D histogram filter
19. Kalman Filters: intuition behind the Kalman filter, vehicle tracking algorithm, one-dimensional tracker implementation
20. State and Motion: represent the state of a car as a vector that can be modified using linear algebra
21. Matrices and Transformation of State: learn the matrix operations for multidimensional Kalman filters
22. SLAM: SLAM implementation for an autonomous vehicle, creating a map of landmarks
23. Vehicle Motion and Calculus
24. Project: Landmark Detection & Tracking: implement SLAM using probability, motion models and linear algebra
25. Apply Deep Learning Models: style transfer using pre-trained models that others have provided on GitHub
26. Feedforward and Backpropagation: introduction to the feedforward pass and backpropagation in neural networks
27. Training Neural Networks: techniques to improve training
28. Deep Learning with PyTorch: build deep learning models with PyTorch
29. Deep Learning for Cancer Detection: a CNN detects skin cancer
30. Sentiment Analysis: a CNN for sentiment analysis
31. Fully-Convolutional Neural Networks: classify every pixel in an image
32. C++ Programming: getting started
33. C++: vectors
34. C++: local compilation
35. C++: OOP
36. Python and C++ Speed
37. C++ Intro to Optimization
38. C++ Optimization Practice
39. Project: Optimize Histogram Filter
---

- Use layers wisely: they will make things a lot faster when rebuilding Dockerfiles
- Use multi-stage builds: that way only what is needed to execute the final project is actually included in the image. The first stage, where the image gets compiled, contains all resources needed for compilation. Those are not shared with the final image --> the footprint is drastically reduced
	- multi-stage builds also make compilation a lot faster, since multiple stages that depend on the same initial stage can be built simultaneously
- Use multiple targets to build multiple images from one docker build
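A minimal multi-stage sketch of the points above (base images, file names and paths are placeholders):

```dockerfile
# build stage: contains the compiler and all sources
FROM gcc:13 AS build
WORKDIR /app
COPY . .
RUN g++ -O2 -o myapp main.cpp

# final stage: only the compiled binary is copied over, so the
# compiler and sources never end up in the shipped image
FROM debian:bookworm-slim
COPY --from=build /app/myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```

With `docker build --target build .` you can also build just the first stage, which is handy for debugging.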
## Images

- `scratch`: an empty docker image: it is the smallest docker image there is and should be used to build shippable images

# Networks

# Conflict with [[UFW]]

# Space Problem

Docker's default root directory is `/var/lib/docker`, which is part of the root partition (if your machine is set up with separate root and home partitions). Often the root partition contains a lot less available space than the home partition. However, docker can use a lot of space, since it needs to pull entire images from the web and store intermediate builds as well. By default it does not delete old files. One option would be to resize the root partition, but that is cumbersome, especially on a live system.
What have I done to overcome this problem?

I have deleted all docker files and then redefined where docker stores its files ([this website](https://www.baeldung.com/linux/docker-fix-no-space-error#changing-the-default-storage-location) helped a lot).

1. Check the current docker root directory: `docker info -f '{{.DockerRootDir }}'` --> this points to `/var/lib/docker` by default.
2. Prune all docker files. Attention, this deletes all docker-related files; make sure that your data is safe (if it is only stored within a docker container, it will be lost):
	1. `docker container prune; docker image prune; docker volume prune`
	2. `docker system prune -a --volumes`
3. Redefine where the docker root folder is by creating or modifying the file `/etc/docker/daemon.json`:
	1. `sudo vim /etc/docker/daemon.json`
```json
{
    "data-root": "/path/to/new/root/folder"
}
```
4. Restart the docker daemon: `sudo systemctl restart docker`
5. Verify the new docker root dir by rerunning the command of step 1.

Now you should be able to rebuild the docker containers that caused the problems.
---
aliases:
  - ufw
---

# Software Implementations

## Uncomplicated Firewall - UFW

[UFW](https://en.wikipedia.org/wiki/Uncomplicated_Firewall) is a Linux software stack to very easily implement a firewall. It works by changing [iptables](https://en.wikipedia.org/wiki/Iptables). There is a known issue when using ufw together with [[Docker#Networks|docker networking]], since both modify iptables.
```json
{
    "nodes": [
        {"id":"d131a066d094a1b5","x":-300,"y":-267,"width":250,"height":50,"type":"text","text":"Image Processing\nInput"}
    ],
    "edges": []
}
```
---
aliases:
  - BMS
---

# Research

A [battery management system](https://en.wikipedia.org/wiki/Battery_management_system) (or BMS) is a system that manages a rechargeable battery (a single cell or multiple cells in a battery pack). Its main goal is to keep the battery in its safe operating area, which is usually defined by a temperature range, a voltage range and a current range that cannot be exceeded. Additionally, it might measure data (voltage, current, state of charge, etc.) and report it externally, and oftentimes it also makes sure that a battery pack remains balanced (the difference in cell voltages should be as close to 0 as possible).
## Important Properties

- [Voltage](https://en.wikipedia.org/wiki/Voltage "Voltage"): minimum and maximum cell voltage
- [State of charge](https://en.wikipedia.org/wiki/State_of_charge "State of charge") (SoC) or [depth of discharge](https://en.wikipedia.org/wiki/Depth_of_discharge "Depth of discharge") (DoD), to indicate the charge level of the battery
- [State of health](https://en.wikipedia.org/wiki/State_of_health "State of health") (SoH), a variously defined measurement of the remaining capacity of the battery as % of the original capacity
- [State of power](https://en.wikipedia.org/w/index.php?title=State_of_power&action=edit&redlink=1 "State of power (page does not exist)") (SoP), the amount of power available for a defined time interval given the current power usage, temperature and other conditions
- State of Safety (SOS)
- Maximum charge current as a [charge current limit](https://en.wikipedia.org/w/index.php?title=Charge_current_limit&action=edit&redlink=1 "Charge current limit (page does not exist)") (CCL)
- Maximum discharge current as a [discharge current limit](https://en.wikipedia.org/w/index.php?title=Discharge_current_limit&action=edit&redlink=1 "Discharge current limit (page does not exist)") (DCL)
- Energy [kWh] delivered since last charge or charge cycle
- Internal impedance of a cell (to determine open circuit voltage)
- Charge [Ah] delivered or stored (sometimes this feature is called
- Total operating time since first use
- Total number of cycles
- Temperature monitoring
- Coolant flow for air- or liquid-cooled batteries
# Development

A great overview of the important characteristics when designing a BMS can be found on [monolithic power's website](https://www.monolithicpower.com/how-to-design-a-battery-management-system-bms).

The system consists of an analog front end (AFE) and a fuel gauge section.

## Analog Frontend

The AFE handles the following:

- cell balancing
- the main low-side sense resistor for current measurements
- the main high-side MOSFET control to connect/disconnect the battery. Can we use this as an on/off switch?

## Fuel Gauging

Article used: [chapter 3](https://www.ti.com/lit/ug/sluuco5a/sluuco5a.pdf?ts=1698826671597&ref_url=https%253A%252F%252Fwww.google.com%252F)

The expression comes from the car industry, where one tried to measure the remaining fuel available in the tank. Nowadays batteries are used as power sources much more often, and the term fuel gauging survived the transition. For batteries it means: how much energy is left that we can safely take out of the battery?

Texas Instruments has a technology called Impedance Track (IT) that models the battery and estimates the remaining state of charge.
The main factors are:

- measuring Qmax
- measuring cell impedance
- calculating capacities

### Factors

**Aging**: every cell has aging effects. Qmax and cell impedances can account for the aging effect as the cell is cycled.

**Temperature**: the temperature is an important factor for the available charge left in the battery.

# Glossary

| Definition | Meaning |
| ---------- | ------- |
| Qmax       | amount of charge available in a fully charged cell |
| SoC        | State of Charge (in %) |
| OCV        | Open circuit voltage |
| DOD        | depth of discharge: during no-load condition |
# Dos and Don'ts

- Every [[copper]] patch on the PCB must be electrically defined (and thus mostly grounded). If they are left floating, they can act as antennas.

# Examples

## Heatsinks and Grounding Strategies

- It is recommended to ground heat sinks: if they are placed above high-frequency ICs (>100MHz), parasitic currents can build up within the heatsink, making it act just like a huge antenna and thus create electromagnetic interference (EMI). This might cause the entire product to fail compliance tests or might cause problems in other sensitive circuits close by.
---

The servo used in the Payloadbay behaves quite weirdly. Currently we use the standard PWM signal at 50Hz with an on-period between 800-2200 microseconds, as is visible in the scope shot below.

![[Servo_no_noise.png]]

Whenever there is a load on the servo it becomes acoustically very noisy and the current draw goes up significantly, which makes sense. But even if I stop the servo at a certain location the noise continues. On the scope you can clearly see spikes on the signal line at roughly every 2.7 milliseconds.
![[servo_noisy.png]]

___

# Experiments

## Current draw

When making the sound the servo draws roughly 0.5A.

## Voltage Drop of the Supply

In purple we measured the power supply of the servo while it was making the sound, and in yellow the signal. We can observe a voltage drop of roughly 0.5V in the power supply at roughly 370 Hz.

![[SDS00002.png]]

![[SDS00003.png]]

Since both the power supply and the signal are affected, it could also be the ground reference that has changed, even though the voltage drop is smaller in the signal than in the power supply.

# Possible solutions

- just turn the servo off
- program the servo (torque limit, PID values)
- a flyback diode across VCC and GND
- a large capacitor across VCC and GND
- a more powerful power supply (the current one might not be powerful enough)
- use the commanded signal to overshoot quickly (to pull it further) and then go back to the desired value. This would imitate a larger P value to overcome the friction.
- use a feedback signal (ideally from the servo itself, or from an additional sensor)
---

# Glossary

| Name | Meaning | Name | Meaning |
| ---- | ---- | ---- | ---- |
| Ambient Temperature Ta | Temperature of the air around the IC | Junction Temperature Tj | Highest temperature inside a semiconductor |
| Thermal Resistance \[°C / W] | Ability to dissipate internally generated heat: the increase in Tj per dissipated watt of power. This value in the datasheet is usually empirically determined. | Case Temperature Tc | Temperature of the case |
| Maximum Junction Temperature Tjmax | The device must be kept below this, else it stops working | Power Dissipation Pd \[W] | Power consumed during operation |
|
||||
# Introduction
|
||||
Junction Temperature Tj is affected by:
|
||||
- ambient temperature Ta
|
||||
- Airflow / or other cooling methods
|
||||
- IC packaging material and technique (flip chip vs wire bond)
|
||||
- PCB material
|
||||
- Heat from other sources
|
||||
|
||||
The junction temperature can be decreased by adding airflow or heat sinks but it will always be above the ambient temperature.
|
||||
|
||||
# Cooling methods
|
||||
All cooling methods basically reduce thermal resistance.
|
||||
The most effective way to transport away the heat is to have a large via array below the IC to move the heat through the pcb copper to the opposite layer and distribute it into the entire board. From there it will then go into the sourroundings.
|
||||
|
||||
# Modeling
|
||||
A good explanation can be found in [this video](https://www.youtube.com/watch?v=RV6b9horB-I&ab_channel=PowerElectronics) by Martin Ordonez.
|
||||
![[Pasted image 20240220154606.png]]
|
||||
|
||||
Heat transfer happens as conduction, convection or radiation, whereas in PCB design its mostly conduction that is important (convection is important for heatsink calculations, but those can usually be found in their datasheets).
|
||||
|
||||
A thermal resistance is used to model the process (just as an electrical resistance):
|
||||
![[Pasted image 20240220154835.png]]
|
||||
|
||||
The thermal resistance depends on the material, the length and the area of the conduction path.
|
||||
|
||||
Towards the end of the video you can find details on how to calculate the final junction temperature in different scenarios (no heatsink, heatsink, forced airflow).
|
||||
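Following the resistance analogy above, the junction temperature works out as Tj = Ta + Rth(junction-ambient) · Pd. A small sketch with made-up example numbers:

```cpp
#include <cassert>
#include <cmath>

// Junction temperature from the thermal-resistance model above:
//   Tj = Ta + Rth_ja * Pd
// where Rth_ja is the junction-to-ambient thermal resistance in degC/W.
double junctionTemp(double ambientC, double rthJaCPerW, double powerW) {
    return ambientC + rthJaCPerW * powerW;
}
```

For example (hypothetical numbers), an IC with Rth_ja = 40 °C/W dissipating 1.5 W at 25 °C ambient sits at 85 °C; cooling measures such as the via array lower Rth_ja, not the formula.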
# Sources
- [Infineon Guide](https://www.infineon.com/dgdl/Infineon-AN4017_Understanding_Temperature_Specifications_An_Introduction-ApplicationNotes-v11_00-EN.pdf?fileId=8ac78c8c7cdc391c017d071d497a2703)
Binary file not shown.
@@ -1,3 +0,0 @@
# Open Source
## Klipper3d
https://www.klipper3d.org/
Binary file not shown.
Binary file not shown.
Binary file not shown.
@@ -1 +0,0 @@
This article compares different actuators used in robotic systems.
Binary file not shown.
@@ -1,8 +0,0 @@
https://www.geeksforgeeks.org/a-search-algorithm/

# To Remember

1.

# To Study / Verify

1. Can we add wind to the heuristic function? It is easier to move with the wind than against it.
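One way the wind idea above could be tested: bias the straight-line heuristic by the tailwind component toward the goal. This is a hypothetical sketch, not a verified answer. Note that if the bias ever makes the heuristic overestimate the true remaining cost, it stops being admissible and A* loses its optimality guarantee, so the scale factor must be chosen conservatively.

```cpp
#include <cassert>
#include <cmath>

// Wind-biased heuristic sketch: reduce the straight-line estimate when the
// wind pushes toward the goal. (dx, dy) is the vector from the node to the
// goal, (wx, wy) the wind vector; windScale controls how strongly the wind
// influences the estimate. All names here are illustrative.
double windHeuristic(double dx, double dy,
                     double wx, double wy,
                     double windScale) {
    double dist = std::hypot(dx, dy);
    if (dist == 0.0) return 0.0;
    // Positive when the wind blows toward the goal -> motion is cheaper.
    double tailwind = (dx * wx + dy * wy) / dist;
    double h = dist - windScale * tailwind;
    return h > 0.0 ? h : 0.0;   // a heuristic must never go negative
}
```

With no wind this reduces to the plain Euclidean distance; a tailwind lowers the estimate.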
@@ -1,24 +0,0 @@
---
title: Divide and Conquer
created_date: 2024-10-22
updated_date: 2024-10-22
aliases:
tags:
---
# Divide and Conquer

This is an algorithm design paradigm where the main problem is split into subproblems, which are in turn split further until they can be solved directly on a small scale and, if needed, stitched back together into the overall solution. This recursive approach is widely used in [[computer science]].
Mathematically, such algorithms are often proved correct by [[mathematical induction]].
## Example Algorithms
- [[Sorting Algorithms]] such as quicksort or merge sort
- Multiplying large numbers: [[Karatsuba Algorithm]]
- Finding the closest pair of points
- Computing the discrete Fourier transform ([[Fast Fourier Transform |FFT]])

It can also be used in [[Computer Vision]] by first defining interesting [[Region of Interests |ROI]]s and running the heavy algorithm only on those subparts of the image.

## Advantages
- [[GPU]]s can parallelize the subtasks and thus run the process much faster
- Simplification: the subproblems become simpler to solve
- Algorithmic efficiency: reduce the [[big-O notation]] complexity
- Memory access: if a subproblem is small enough it fits entirely in the CPU [[cache]], which makes solving it much faster
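Merge sort from the examples above is a canonical instance of the paradigm: split the input in half, solve each half recursively, then stitch the results back together in the merge step.

```cpp
#include <cassert>
#include <vector>

// Merge sort: divide the range in half, sort each half recursively,
// then merge the two sorted halves back together (the "conquer" step).
std::vector<int> mergeSort(const std::vector<int>& v) {
    if (v.size() <= 1) return v;                    // base case: trivially sorted
    auto mid = v.begin() + v.size() / 2;
    std::vector<int> left  = mergeSort({v.begin(), mid});
    std::vector<int> right = mergeSort({mid, v.end()});
    std::vector<int> out;
    out.reserve(v.size());
    std::size_t i = 0, j = 0;
    while (i < left.size() && j < right.size())     // merge: take the smaller head
        out.push_back(left[i] <= right[j] ? left[i++] : right[j++]);
    while (i < left.size())  out.push_back(left[i++]);
    while (j < right.size()) out.push_back(right[j++]);
    return out;
}
```

The recursion depth is O(log n) and each level does O(n) merging work, giving the well-known O(n log n) total.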
@@ -1,12 +0,0 @@
---
aliases:
  - ICP
---
[Iterative closest point](https://en.wikipedia.org/wiki/Iterative_closest_point) (ICP) is an algorithm to minimize the difference between two point clouds, which means it can be used to reconstruct 2D or 3D surfaces from different scans. It is one approach to the generic problem of [point-set registration](https://en.wikipedia.org/wiki/Point-set_registration).

# Implementations
- The library [libpointmatcher](https://github.com/norlab-ulaval/libpointmatcher?tab=readme-ov-file)
- The lightweight library [simpleICP](https://github.com/pglira/simpleICP)
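A minimal, translation-only sketch of one ICP iteration (an illustrative toy, not the libraries above: real ICP also estimates a rotation, e.g. via SVD of the cross-covariance, and iterates until the alignment error stops improving):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// One translation-only ICP step: for every source point find the closest
// target point (brute force here), then return the mean residual as the
// translation to apply to the source cloud before the next iteration.
Pt icpTranslationStep(const std::vector<Pt>& src, const std::vector<Pt>& dst) {
    double sx = 0.0, sy = 0.0;
    for (const Pt& p : src) {
        double best = 1e300;
        Pt match = dst.front();
        for (const Pt& q : dst) {                   // "closest point" search
            double d = (q.x - p.x) * (q.x - p.x) + (q.y - p.y) * (q.y - p.y);
            if (d < best) { best = d; match = q; }
        }
        sx += match.x - p.x;
        sy += match.y - p.y;
    }
    return {sx / src.size(), sy / src.size()};      // mean offset
}
```

Because the closest-point correspondences change after each shift, the step is repeated until convergence; production implementations add k-d trees for the search and outlier rejection.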
@@ -1,3 +0,0 @@
Mapping in robotics is the process of measuring and sensing the environment and using this information to populate a map. In the process the information is sorted (usually into obstacles and free space) into the categories needed for the specific robotic application.

In order to start mapping we need to know where we are, so [[Localization]] is required.
@@ -1,6 +0,0 @@
It is an open-loop control system that tries to predict and counteract vibrations in an actuated system such as a 3D printer. The first couple of minutes of [this video](https://youtu.be/Fe_BFGg_ojg) explain a bit more.

# Use
## 3D Printers
This year, 3D printers have become ten times faster while keeping the same print quality, mostly because of input shaper technology. The guy in the video above uses a frequency test suite to test for resonance peaks. Next to the large peaks there are often small peaks that frequently correspond to slightly loose screws. The image below shows a measurement before and after tightening all screws of the frame: the first of the double peaks around 40 Hz completely disappeared after the tightening.
![[Pasted image 20240312175212.png]]
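As a sketch of the underlying idea: the classic ZV (zero-vibration) shaper splits each command into two impulses timed half a damped vibration period apart, so the oscillation excited by the first impulse is cancelled by the second. The numbers below are illustrative, not tied to any particular printer firmware.

```cpp
#include <cassert>
#include <cmath>

// ZV input shaper: two impulses whose amplitudes depend on the damping
// ratio zeta; the second arrives half a damped period after the first and
// cancels the residual vibration at the resonance frequency freqHz.
struct ZvShaper { double a1, a2, delay; };

ZvShaper makeZvShaper(double freqHz, double zeta) {
    const double pi = 3.14159265358979323846;
    double k = std::exp(-zeta * pi / std::sqrt(1.0 - zeta * zeta));
    double dampedPeriod = 1.0 / (freqHz * std::sqrt(1.0 - zeta * zeta));
    return {1.0 / (1.0 + k),      // first impulse amplitude
            k / (1.0 + k),        // second impulse amplitude
            0.5 * dampedPeriod};  // time of the second impulse [s]
}
```

For an undamped 40 Hz resonance (like the peak in the plot above) this yields two equal half-amplitude impulses 12.5 ms apart; the amplitudes always sum to 1, so the commanded end position is unchanged.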
@@ -1,2 +0,0 @@
Path planning is the process of finding a path between a starting pose and a goal pose.
In order to plan a path we need a map, which is a representation of the environment. The map is usually dynamic and is produced by a process called [[Mapping]].
Binary file not shown.
Binary file not shown.
@@ -1,31 +0,0 @@
Also known as managed nodes in [[ROS2]]. The [[ROS2 - NAV2 Library|NAV2]] library makes good use of it.
From [ROS2 Design](https://design.ros2.org/articles/node_lifecycle.html):
>A managed life cycle for nodes allows greater control over the state of ROS system. It will allow roslaunch to ensure that all components have been instantiated correctly before it allows any component to begin executing its behaviour. It will also allow nodes to be restarted or replaced on-line.

>The most important concept of this document is that a managed node presents a known interface, executes according to a known life cycle state machine, and otherwise can be considered a black box. This allows freedom to the node developer on how they provide the managed life cycle functionality, while also ensuring that any tools created for managing nodes can work with any compliant node.

There are 4 primary states: *unconfigured, inactive, active, finalized*
There are 7 transitions: *create, configure, cleanup, activate, deactivate, shutdown and destroy*
## States
All nodes start in the **unconfigured** state, which is essentially an empty state where everything begins; a node may also end there.
More important is the **inactive** state. Its purpose is to breathe life into a node: it allows the user to read parameters, add subscriptions and publications, and (re)configure the node so that it can fulfill its job. This is done while the node is not running; while in this state it will not receive any data from other processes.
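The four primary states and the transitions between them can be sketched as a plain state machine (a simplified illustration, not the rclcpp API; error handling and the intermediate transition states are omitted):

```cpp
#include <cassert>
#include <string>

// Primary lifecycle states as described above.
enum class State { Unconfigured, Inactive, Active, Finalized };

// Returns the resulting state, or the current state if the transition is
// not valid from there. "shutdown" is allowed from any non-finalized state.
State apply(State s, const std::string& transition) {
    if (s == State::Unconfigured && transition == "configure")  return State::Inactive;
    if (s == State::Inactive     && transition == "cleanup")    return State::Unconfigured;
    if (s == State::Inactive     && transition == "activate")   return State::Active;
    if (s == State::Active       && transition == "deactivate") return State::Inactive;
    if (s != State::Finalized    && transition == "shutdown")   return State::Finalized;
    return s;                                                   // invalid transition
}
```

This mirrors the lifecycle: a node must pass through *inactive* on its way between *unconfigured* and *active*, and a managing node drives these transitions.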
## Transition Callbacks
The main functions to implement for a custom node in the lifecycle scheme are:
### onConfigure()
Here we implement the things that are executed only once in the node's lifetime, such as obtaining permanent memory buffers and setting up topic publications/subscriptions that do not change.

### onCleanup()
This transition function is called when a node is being taken out of service (essentially the opposite of *onConfigure()*). It leaves the node without a state, so that there is no difference between a node that got cleaned up and one that was just created.

### onActivate()
This callback implements any final preparations before the node executes its main purpose. Examples are acquiring resources needed for execution, such as access to hardware (it should return fast, without a lengthy hardware startup).

### onDeactivate()
This callback should undo anything that *onActivate()* did.

## Management Interface
This is a common interface that allows a managing node to manage the different lifecycle nodes accordingly.

## Managing Node
This is the node that loads the different lifecycle nodes, is responsible for bringing them from one state into the next, and handles any errors they feed back.
@@ -1,83 +0,0 @@
The Pluginlib is a library for [[ROS2]] that allows very modular development. It is heavily used in the [[ROS2 - NAV2 Library|NAV2]] library.
From the [pluginlib tutorial]():
>`pluginlib` is a C++ library for loading and unloading plugins from within a ROS package. Plugins are dynamically loadable classes that are loaded from a runtime library (i.e. shared object, dynamically linked library). With pluginlib, you do not have to explicitly link your application against the library containing the classes – instead `pluginlib` can open a library containing exported classes at any point without the application having any prior awareness of the library or the header file containing the class definition. Plugins are useful for extending/modifying application behavior without needing the application source code.

Basically it allows you to define an abstract base class that specifies the interface of the plugin: which (`virtual`) functions need to be overridden and which variables exist. You can then derive multiple packages with different implementations of this plugin base class, which are used by an executor. Those plugins can be loaded at runtime without prior knowledge about them because they all follow the same structure.

Requirements:
1. A constructor without parameters -> use an initialization function instead.
2. Make the header available to other classes
	1. Add the following snippet to `CMakeLists.txt`:
```cmake
install(
  DIRECTORY include/
  DESTINATION include
)
...
ament_export_include_directories(
  include
)
```
3. In the C++ file where you define your plugins, add the following macro at the very end. It creates the plugin instances when the corresponding library is loaded.
```cpp
#include <pluginlib/class_list_macros.hpp>

PLUGINLIB_EXPORT_CLASS(polygon_plugins::Square, polygon_base::RegularPolygon)
PLUGINLIB_EXPORT_CLASS(polygon_plugins::Triangle, polygon_base::RegularPolygon)
```
4. The plugin loader needs some information to find the library and to know what to reference inside it. Thus an XML file needs to be written, as well as an export line in the `package.xml` file. With those two additions ROS knows everything it needs in order to use the plugins. In the following snippet the two plugins, Square and Triangle, are defined in a `plugins.xml` file.
```xml
<library path="polygon_plugins">
  <class type="polygon_plugins::Square" base_class_type="polygon_base::RegularPolygon">
    <description>This is a square plugin.</description>
  </class>
  <class type="polygon_plugins::Triangle" base_class_type="polygon_base::RegularPolygon">
    <description>This is a triangle plugin.</description>
  </class>
</library>
```

```cmake
# polygon_base: package with base class
# plugins.xml: relative path to plugin file defined above
pluginlib_export_plugin_description_file(polygon_base plugins.xml)
```

## How to use the plugins
The plugins can be used in any package you want:
```cpp
#include <pluginlib/class_loader.hpp>
#include <polygon_base/regular_polygon.hpp>

int main(int argc, char** argv)
{
  // To avoid unused parameter warnings
  (void) argc;
  (void) argv;

  pluginlib::ClassLoader<polygon_base::RegularPolygon> poly_loader("polygon_base", "polygon_base::RegularPolygon");

  try
  {
    std::shared_ptr<polygon_base::RegularPolygon> triangle = poly_loader.createSharedInstance("polygon_plugins::Triangle");
    triangle->initialize(10.0);

    std::shared_ptr<polygon_base::RegularPolygon> square = poly_loader.createSharedInstance("polygon_plugins::Square");
    square->initialize(10.0);

    printf("Triangle area: %.2f\n", triangle->area());
    printf("Square area: %.2f\n", square->area());
  }
  catch(pluginlib::PluginlibException& ex)
  {
    printf("The plugin failed to load for some reason. Error: %s\n", ex.what());
  }

  return 0;
}
```
>Important note: the `polygon_base` package in which this node is defined does NOT depend on the `polygon_plugins` class. The plugins will be loaded dynamically without any dependency needing to be declared. Furthermore, we’re instantiating the classes with hardcoded plugin names, but you can also do so dynamically with parameters, etc.
@@ -1,6 +0,0 @@
- [ ] #todo/b Write an overview of what ROS2 does in my own words. Advantages / disadvantages

# Build System
The ROS2 [build system](https://docs.ros.org/en/humble/Concepts/Advanced/About-Build-System.html) is a challenging part, because packages written in different languages such as [[C++]] or [[Python]] need to be built together in order to form a unit.
To achieve this, ROS2 relies heavily on the [[Colcon]] build system, which under the hood uses [[CMake]] for C++ packages and setuptools for Python packages. In order to define dependencies across the different packages and languages, ROS2 packages always contain a `package.xml` file, also known as the manifest file, which contains essential metadata about the package, such as its dependencies.
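A minimal `package.xml` manifest might look like this (a sketch; the package name, maintainer, and dependency list are hypothetical examples):

```xml
<?xml version="1.0"?>
<package format="3">
  <name>my_robot_driver</name>
  <version>0.1.0</version>
  <description>Example driver package (hypothetical)</description>
  <maintainer email="dev@example.com">Dev</maintainer>
  <license>Apache-2.0</license>

  <!-- build tool for a C++ package; Python packages use ament_python -->
  <buildtool_depend>ament_cmake</buildtool_depend>
  <!-- build- and run-time dependency -->
  <depend>rclcpp</depend>

  <export>
    <build_type>ament_cmake</build_type>
  </export>
</package>
```

Colcon reads these manifests to order the build across packages, regardless of the language each package is written in.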