First Commit

This commit is contained in:
2024-12-02 15:11:30 +01:00
commit 031f6004de
4688 changed files with 441558 additions and 0 deletions

View File

@@ -0,0 +1,136 @@
---
aliases:
- CMakeLists.txt
---
CMake is a great tool for building C++ projects across different platforms: it manages all dependencies as well as how the project is installed on each platform. A great guide to modern CMake can be found [here](https://cliutils.gitlab.io/modern-cmake/), and a good example project [here](https://gitlab.com/CLIUtils/modern-cmake/-/tree/master/examples/extended-project).
It also integrates well with the [[GTest Framework|GoogleTest]] framework, which allows the tests to be defined in a separate folder. They are built together with the project and executed using `ctest`.
# CMake
## Nomenclature
| Definition | Meaning |
| ---------- | ---------------------------------------------------------------------------------------------------------------------------- |
| Target     | An executable, library or custom command that can be built and installed.                                                    |
| Library    | A collection of code compiled into `lib<name>.a` or `<name>.lib` that other projects can use by linking against it.          |
| Executable | A binary file that runs a program. On Windows it is usually `<name>.exe`.                                                    |
## How does it work?
At a high level, we want to understand what CMake does.
## Default Setup for Simple Projects
In most projects so far, we define the project, look for the needed dependencies, define our executable or library, link the dependencies and finally install it to the system.
For such a simple project the file tree looks like this:
```sh
├── CMakeLists.txt
├── include
│ └── some.h
└── src
├── some.cpp
└── things.h
```
For this a typical CMakeLists.txt file would look something like the following:
- [ ] Add all the important commands here for cmake to have a small snippet to use in the future #todo/b
```cmake
cmake_minimum_required(VERSION 3.15)
project("SomeLibrary" VERSION 0.1.0)
add_library(${PROJECT_NAME} src/some.cpp)
```
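The snippet above is minimal; a fuller sketch for the file tree shown earlier might look like the following (the target name, paths and minimum CMake version are illustrative assumptions):
```cmake
cmake_minimum_required(VERSION 3.15)
project("SomeLibrary" VERSION 0.1.0 LANGUAGES CXX)

# the library target and its public headers
add_library(${PROJECT_NAME} src/some.cpp)
target_include_directories(${PROJECT_NAME} PUBLIC include)

# copy the compiled library and the headers into install_path
install(TARGETS ${PROJECT_NAME}
        LIBRARY DESTINATION lib
        ARCHIVE DESTINATION lib)
install(DIRECTORY include/ DESTINATION include)
```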
# CMake Targets
## Library
A library is code that can be imported and used by other code. Usually the header files that define the public classes are shared in raw form, whereas the implementation might only be compiled into a `.so` file (or similar). Typically these files are installed into `install_path/include/library_name/...` and `install_path/lib/library_name.so`, respectively (`.so` stands for shared object). For system installs (usually invoked with `sudo make install`) the `install_path` is `/usr/local/` (if not changed manually in the cmake configuration).
A good tutorial and overview of what is needed in cmake can be found [here](https://iamsorush.com/posts/cpp-cmake-config/) or in [this very good description](https://cliutils.gitlab.io/modern-cmake/chapters/install/installing.html). As a general overview we need to do the following things:
1. add library target and all dependencies
2. make the target including specifying where it will be installed (`DESTINATION` keyword)
3. export the target to a `library_nameTargets.cmake` file (defines all cmake related stuff that I do not care about)
4. In order to actually install the library so that it can be found by cmake we need to have 3 files:
1. `library_nameConfig.cmake`: we define it ourselves and import the next file
2. `library_nameTargets.cmake`: this is automatically written by step 3
3. `library_nameConfigVersion.cmake`: contains information about the version of the library
To do this with CMake we need to do the following in the main `CMakeLists.txt` file:
```cmake
# 1. Add library target and dependencies
add_library(library_name SHARED) # call to create a library target
target_include_directories(library_name PRIVATE "${PROJECT_SOURCE_DIR}") # tell the target where to find important include files
add_subdirectory("subdirectory_name") # add subdirectories if needed - with their own CMakeLists.txt files
# 2. Make Install target
# Finally we need to install the library
# this defines the library_nameTargets export set (because of the EXPORT keyword; nothing is actually installed yet)
install(TARGETS library_name
EXPORT library_nameTargets # this file is written here and later included
FILE_SET HEADERS
LIBRARY DESTINATION lib
ARCHIVE DESTINATION lib
RUNTIME DESTINATION bin
INCLUDES DESTINATION include)
# 3. Export the install target
# Now we define where the Targets file will be installed, as well as a CMake namespace for the imported targets.
install(EXPORT library_nameTargets
FILE library_nameTargets.cmake
NAMESPACE libName::
DESTINATION lib/cmake/library_name)
# if your project has no dependencies you can replace library_nameTargets.cmake with library_nameConfig.cmake and skip the last step (no. 5), because the needed file has already been written here.
# 4. write the actual .cmake files
include(CMakePackageConfigHelpers) # load helper to create config file
# creates file library_nameConfigVersion.cmake which is needed when you try to find a package in another project with find_package()
write_basic_package_version_file(
"library_nameConfigVersion.cmake"
VERSION ${library_name_VERSION}
COMPATIBILITY AnyNewerVersion)
# 5. install the library_nameConfig.cmake file
# finally the hand-written library_nameConfig.cmake and the generated ConfigVersion file are copied to the install destination, which is needed to find the library with find_package() in other projects
install(FILES "library_nameConfig.cmake" "${CMAKE_CURRENT_BINARY_DIR}/library_nameConfigVersion.cmake"
        DESTINATION lib/cmake/library_name)
```
The file `library_nameConfig.cmake` contains the following:
```cmake
include(CMakeFindDependencyMacro)
# find_dependency(xxx 2.0) # if any dependencies are needed
# this includes the Targets.cmake file that is created in step 3 in the cmake file above.
include(${CMAKE_CURRENT_LIST_DIR}/library_nameTargets.cmake)
```
## Executable
# Uninstall Target
The default sequence to install a cmake project is the following:
```bash
mkdir build && cd build
cmake ..
cmake --build .
sudo make install
```
The last command will execute the installation, which basically copies the important files (as specified in the CMakeLists.txt) into a system directory (usually `/usr/local/`). When this happens, a file called `install_manifest.txt` is created in the build folder which lists all installed files. In order to undo the installation you can run the [following command](https://stackoverflow.com/a/44649542/7705525):
```bash
xargs rm < install_manifest.txt
```
If you want to get fancier you can also create an [uninstall target](https://gitlab.kitware.com/cmake/community/-/wikis/FAQ#can-i-do-make-uninstall-with-cmake), which basically iterates through the `install_manifest.txt` file and removes every file (and folder, if empty) listed in it.
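Roughly following the Kitware FAQ, such a target can be sketched like this (it assumes a `cmake_uninstall.cmake.in` template in the source tree, as described in the linked FAQ):
```cmake
# create an "uninstall" target that replays install_manifest.txt
if(NOT TARGET uninstall)
  configure_file(
    "${CMAKE_CURRENT_SOURCE_DIR}/cmake_uninstall.cmake.in"
    "${CMAKE_CURRENT_BINARY_DIR}/cmake_uninstall.cmake"
    IMMEDIATE @ONLY)
  add_custom_target(uninstall
    COMMAND ${CMAKE_COMMAND} -P "${CMAKE_CURRENT_BINARY_DIR}/cmake_uninstall.cmake")
endif()
```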
# How is CMake used in ROS2 and Colcon
ROS2 makes extensive use of `CMake` in its build system. It is all hidden behind the `colcon` command, but the projects all contain `CMakeLists.txt` files that define how a ROS2 package is compiled. A good place to look at example source code is the [[ROS2 - NAV2 Library|NAV2]] library.
- [ ] #todo/b add cmake specifics and how ament, and colcon uses it. Also how testing is done.
- [ ] [Ament_cmake user documentation](https://docs.ros.org/en/rolling/How-To-Guides/Ament-CMake-Documentation.html) #todo/b
- [ ]
# Flashcards
#learning/cpp
how to run unit tests (gtest) of a cmake project ;; `ctest` after having built the project
<!--SR:!2023-12-10,14,290-->
how to define a executable in a cmake project ;; `add_executable(name sourcefile1 sourcefile2 ...)`
<!--SR:!2024-03-15,59,310-->
how to define a library in a cmake project ;; `add_library(name sourcefile1 sourcefile2 ...)`
<!--SR:!2023-12-09,13,290-->
# Resources
- https://decovar.dev/blog/2021/03/08/cmake-cpp-library/
- https://raymii.org/s/tutorials/Cpp_project_setup_with_cmake_and_unit_tests.html
- https://iamsorush.com/posts/cpp-cmake-config/


@@ -0,0 +1,23 @@
---
aliases:
- GoogleTest
- GTest
---
Also known as gtest, GoogleTest is a unit testing library for C++. It is also used by ROS2.
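A minimal [[CMake]] integration, roughly following the official quickstart (the test source path, target name and release archive are illustrative):
```cmake
include(FetchContent)
FetchContent_Declare(
  googletest
  URL https://github.com/google/googletest/archive/refs/tags/v1.14.0.zip)
FetchContent_MakeAvailable(googletest)

enable_testing()
add_executable(my_tests test/my_tests.cpp)   # hypothetical test source
target_link_libraries(my_tests GTest::gtest_main)

include(GoogleTest)
gtest_discover_tests(my_tests)               # registers the tests with ctest
```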
# Flashcards
#learning/cpp
which GTest to use to compare two `std::string` objects? ;; `EXPECT_EQ(s1, s2)`
<!--SR:!2023-12-12,16,290-->
which GTest to use to compare two `char[]` objects? ;; `EXPECT_STREQ(s1, s2)`
<!--SR:!2023-12-13,17,290-->
difference between `ASSERT_EQ` and `EXPECT_EQ`;; **assert** will stop execution if it fails, **expect** will run all following code as well and only report at the end.
<!--SR:!2024-03-08,52,314-->
`Ctest` is not finding any tests ;; in `CMakeLists.txt`: `enable_testing()` needs to come before `include_directories(test)`. Else it does not work
<!--SR:!2024-01-25,9,274-->
# Resources
Get started quickly with [[CMake]]: https://gitlab.cern.ch/google/googletest/-/blob/master/docs/quickstart-cmake.md
A good resource:
[IBM C++ Testing documentation](https://developer.ibm.com/articles/au-googletestingframework/#list2)


@@ -0,0 +1,29 @@
A good article: https://cplusplus.com/doc/tutorial/files/
# Key Takeaways
* There are two modes: binary and text. Text mode translates the bytes being written into a formatted representation (e.g. ASCII) and back when reading, whereas in binary mode no such translation happens
* There are three stream classes: `fstream`, `ofstream`, `ifstream` (f: file, o: output, i: input)
* There are position pointers that keep track of where in the file we're writing to or reading from: they are called **put** and **get** position pointers respectively
* In order to **read the pointers** use: `tellg()` and `tellp()` for get and put, respectively.
* In order to **change the pointers** use: `seekg(position)` and `seekp(position)` for get and put, respectively. The arguments can also be `seekg(offset, direction)`
* There are helper flags for the offset: `ios::beg` for beginning of file, `ios::cur` for current location in the file, `ios::end` for the end of the file
Example:
```c++
// obtaining file size
#include <iostream>
#include <fstream>
using namespace std;
int main () {
  streampos begin, end;
  ifstream myfile ("example.bin", ios::binary);
  begin = myfile.tellg();
  myfile.seekg (0, ios::end);
  end = myfile.tellg();
  myfile.close();
  cout << "size is: " << (end-begin) << " bytes.\n";
  return 0;
}
```


@@ -0,0 +1,57 @@
I have been wanting to code properly with unit tests and continuous integration for a few years now and have never done it properly.
# Development Process
## Unit Testing
### C++
In [[C++]] we use the [[GTest Framework|GoogleTest]] Framework.
## Code Coverage
Code coverage testing basically compiles the code with profiling flags, which count the number of times each line has been executed. And since GoogleTest runs the source code, we can check whether we are testing all the code that we have written.
For C++, typically a tool called `gcovr` is used. In order for it to work you need to compile the code with the following flags: `-fprofile-arcs` and `-ftest-coverage`
If the project uses [[CMake]] you can add the following to your CMakeLists.txt:
```cmake
if(CMAKE_CXX_COMPILER_ID MATCHES GNU)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fprofile-arcs -ftest-coverage")
endif()
```
# Online Tools
In order to do continuous integration we need a place that automatically runs our unit tests whenever a commit is pushed.
## GitLab
Gitlab is known for its [[Gitlab - CI CD|CICD]] (Continuous Integration / Continuous Deployment) pipeline. It allows you to customize and automate the entire process very effectively. The setup process is also very easy and is explained well on [their website](https://docs.gitlab.com/ee/ci/quick_start/).
1. You must have runners[^1] available
2. Create a `.gitlab-ci.yml` file at the root of your repository. The entire pipeline of automated tests is defined in this file.
A good example repo can be found [here](https://github.com/pothitos/gtest-demo-gitlab/tree/master).
My own first gitlab-ci config file (`.gitlab-ci.yml`):
```yaml
image: ubuntu:20.04
job:
script:
- export DEBIAN_FRONTEND=noninteractive
- apt-get update
- apt-get install -y cmake g++ build-essential git gcovr
- cd onesec3d_cpp
- mkdir build && cd build
- cmake -DBUILD_TESTING=ON ..
- cmake --build .
- export DATADIR=../test/sample_3d_files/
- ./test/onesec3d_cpp_test
- gcovr --exclude-directories '_deps' -r ..
```
# Flashcards
#learning/cpp
which command to do code coverage testing;;`gcovr`
<!--SR:!2023-12-21,14,290-->
requirements for `gcovr`;; project must be built with flags: `-fprofile-arcs` and `-ftest-coverage`
<!--SR:!2023-12-08,1,210-->
# Resources
- CI for robotics projects with [[ROS2]]: use [[Gazebo]] that is run on the gitlab server: http://moore-mike.com/bobble-ci.html
# Footnotes
[^1]: GitLab Runner is an application that can execute CI/CD jobs in a GitLab pipeline


@@ -0,0 +1,9 @@
---
aliases:
- CICD
- gitlab
---
# Resources
https://github.com/pothitos/gtest-demo-gitlab/blob/master/CMakeLists.txt


@@ -0,0 +1,10 @@
# Terminator
- A tiling terminal emulator that makes working with only the keyboard much easier
- Make terminator default terminal: `sudo update-alternatives --config x-terminal-emulator`
- Right-click a terminator terminal, go to preferences, change the font to Meslo and switch to another color theme.
# Oh-my-Zsh
- [ ] Add installation snippet here (or gist) #todo/b
# Dotfiles
- [ ] Add dotfile management system to configure your machine #todo/b


@@ -0,0 +1,23 @@
# Dotfiles
# Functions
## Arguments
Some more info can be found [here](https://unix.stackexchange.com/a/378023/460036).
- `$X`, where X is any number, is the Xth argument
- `$@` is a list of all arguments
- `${@:2}` is a list of all arguments, starting from the second one
	- the `:2` basically means an offset
# Cheat Sheet
## Watch
Can be used to repeatedly execute commands, e.g. retrieving the CPU temperature with `sensors`:
```bash
$ sensors
k10temp-pci-00c3
Adapter: PCI adapter
Tctl:   +77.9°C
Tccd1:  +77.5°C
```
So with the watch command this will be updated at a certain interval (e.g. 1s and also update the difference `-d`): `watch -d -n 1 'sensors'`


@@ -0,0 +1,7 @@
#learning/computer_science
# VIM
search in vim ;; in vim: `/SEARCH_TERM`, then `n` for next, `N` (`shift` + `n`) for previous
<!--SR:!2024-03-11,55,310-->
# Nano
cut and paste a line in nano ;; in nano: `ctrl`+`K`, then `ctrl`+`U`
<!--SR:!2024-01-30,14,292!2023-12-21,9,270-->


@@ -0,0 +1,11 @@
# Main Commands
# Flashcards
#learning/computer_science
update local branch list with remote branch list (also deleted ones) ;; `git fetch -p` p: prune
<!--SR:!2024-03-04,48,290-->
reorder the last n commits ;; `git rebase -i HEAD~n` then change the pick order.
<!--SR:!2024-03-24,68,310-->
word to merge multiple commits into one ;; `squash`
<!--SR:!2024-03-23,67,310-->


@@ -0,0 +1,28 @@
# Alternative Protocols
In the drone community the standard PWM protocol has become too slow, mostly because of its limited update rate, and it has therefore been adapted into faster alternatives.
## Overview
A good overview page can also be found [here](https://oscarliang.com/esc-firmware-protocols/).
|ESC Protocol|Signal Width|Lowest Update Rate|
|---|---|---|
|**PWM**|1000-2000us|0.5KHz|
|**OneShot125**|125-250us|4KHz|
|**OneShot42**|42-84us|11.9KHz|
|**MultiShot**|5-25us|40KHz|
|**DShot150**|106.7us|9.4KHz|
|**DShot300**|53.3us|18.8KHz|
|**DShot600**|26.7us|37.5KHz|
|**DShot1200**|13.3us|75.2KHz|
|**DShot2400**|6.7us|149.3KHz|
## Oneshot
Oneshot42 vs Oneshot125
## Multishot
## Proshot
## Dshot
Dshot150 vs Dshot300 vs Dshot600
Bidirectional --> feedback


@@ -0,0 +1,138 @@
---
title: Computer Vision
created_date: 2024-10-22
updated_date: 2024-10-22
aliases:
tags:
---
# Computer Vision
---
- [ ] 3d reconstruction
- [ ] camera calibration
- [ ] photogrammetry
- [ ] image segmentation
- [ ] facial recognition and eigenfaces
- [ ] image stitching
- [ ] feature recognition
- [ ] connection to [[LLM]]s and [[Multi Modal Models]]
- [ ] [[Convolutional Neural Networks]]
- [ ] [[Deep Learning]]
- [ ] [[Signal Processing]]
- [ ] Vision transformer (VT)
- [ ] Tactile feedback sensors through CV
- [ ] Structured-light 3D scanners
- [ ] thermal cameras
- [ ] radar imaging
- [ ] lidar scanners
- [ ] MRI
- [ ] Sonar
- [ ]
---
## Introduction
- Computer Vision acquires, processes, analyzes and understands digital images
- CV works with high-dimensional data and extracts useful information from it: it transforms visual information into descriptions of the world that make sense and can lead to appropriate decision making and action.
- Many subdomains are known
- Object detection and recognition
- Event detection
- 3D pose estimation
- motion estimation
- image restoration
- Definition:
> Computer vision is a field of AI that enables computers to interpret, understand and analyze visual data from images or videos, simulating human vision. It involves tasks like object detection, image classification, and facial recognition, with applications in areas like autonomous vehicles and medical imaging.
### Distinctions
- [[Image Processing]] focuses on 2D images and how to transform one image into another. The input and output of image processing are therefore both images. Thus, image processing neither interprets the image content nor requires assumptions about it
- [[Machine Vision]] focuses on image based automation of inspection, process control, robot guidance in industrial applications. Often, image sensor technologies and [[control theory]] are closely intertwined with machine vision. Often there is interaction with the world, e.g. the lighting can be altered, etc.
- [[Imaging]] focuses primarily on producing images and sometimes also interpreting them. E.g. [[medical imaging]] focuses on producing medical images and detecting diseases through them.
### Foundational Techniques
- Edge detection
- line labelling
- non-polyhedral and polyhedral modelling
- optical flow
- motion estimation
- [[Divide and Conquer]] strategies: run CV algorithms on interesting sub-regions (ROIs) instead of the entire image.
### Applications and Tasks
- Automate inspection
- Identification tasks: e.g. species id
- Controlling processes: e.g. robot
- Detecting events: surveillance, counting, etc
- Monitoring: health, disease, state of object, color graduation, etc.
- modeling objects
- navigation
- organisation of information: indexing existing photos
- tracking of objects, surfaces, edges
- tactile feedback sensor: put a silicone dome with known elastic properties over a camera. On the inside are markers. When the silicone dome touches something, the markers move and a model can calculate the forces and the interaction with the object.
---
## Recognition
- Object recognition: predefined objects that can be identified but not differentiated
- Identification: specific objects are detected and individually tracked: two different people can be differentiated.
- Detection: Object detection together with location: [[Obstacle Detection]] for robots.
[[Convolutional Neural Networks |CNN]]s are currently the state-of-the-art algorithms for object detection in images. They are nearly as good as humans (only very thin objects don't work well), and even better than humans in subcategories (such as breeds of dogs or species of birds).
### Specialized Tasks based on recognition
- Content-based image retrieval: give me all images with multiple dogs in them
- Pose estimation: estimate the pose of an object relative to the camera: e.g. robot arm, human pose, obstacle, etc.
- [[Optical Character Recognition]]: identify characters in images. Is used by many phones and even obsidian nowadays. QR-codes represent a similar task
- [[Facial Recognition]]: matching of faces
- Emotion recognition
- Shape Recognition Technology (SRT)
- (Human) Activity Recognition
## Motion Analysis
Using image sequences to produce an estimate of the velocity of an object allows tracking objects (or the camera itself).
- Egomotion: tracking the rigid 3D-motion of the camera
- Tracking: follow the movements of objects in the frames (humans, cars, obstacles)
- [[Optical Flow]]: determine how each point is moving relative to the image plane: combines the movement of the goal point as well as the camera movement. Can be used to do state estimation of a [[Drone]] for example.
## Others
- Scene reconstruction: the goal is to compute a 3D-Model of a scene from images.
- Image restoration:
---
## Courses
### Udacity
The course about [computer vision](https://www.udacity.com/course/computer-vision-nanodegree--nd891). 2 Week free trial.
1. Image Representation and Classification: numeric representation of images, color masking, binary classification
2. Convolutional Filters and Edge Detection: frequency in images, image filters for detecting edges and shapes in images, use opencv for face detection
3. Types of Features & Image Segmentation: corner detector, k-means clustering for segmenting an image into unique parts
4. feature vectors: describe objects and images using feature vectors
5. CNN layers and feature visualization: define and train your own CNN for clothing recognition, use feature visualization techniques to see what a network has learned
6. Project: Facial Keypoint detection: create CNN for facial keypoint (eyes, mouth, nose, etc.) detection
7. Cloud Computing with AWS: train networks on amazon's GPUs
8. Advanced CNN architectures: region based CNNs, Faster R-CNN --> fast localized object recognition in images
9. YOLO: multi object detection model
10. RNN's: incorporate memory into deep learning model using recurrent neural networks. How do they learn from and generate ordered sequences of data
11. Long Short-Term Memory Networks (LSTMs): dive into architecture and benefits of preserving long term memory
12. Hyperparameters: what hyperparameters are used in deep learning?
13. Attention Mechanisms: Attention models: how do they work?
14. Image Captioning: combine CNN and RNN to build automatic image captioning model
15. Project: Image Captioning Model: predict captions for a given image: implement an effective RNN decoder for a CNN encoder
16. Motion: mathematical representation of motion, introduction of optical flow
17. Robot Localization: Bayesian filter, uncertainty in robot motion
18. Mini-Project: 2D Histogram filter: sense and move functions a 2D histogram filter
19. Kalman Filters: intuition behind kalman filter, vehicle tracking algorithm, one-dimensional tracker implementation
20. State and Motion: represent state of a car in vector, that can be modified using Linear Algebra
21. Matrices and Transformation of State: LinAlg: learn matrix operations for multidimensional Kalman Filters
22. SLAM: SLAM implementation autonomous vehicle and create map of landmarks
23. Vehicle Motion and Calculus
24. Project: Landmark Detection & Tracking: implement SLAM using probability, motion models and linalg
25. Apply Deep Learning Models: Style transfer using pre-trained models that others have provided on github
26. Feedforward and Backpropagation: introduction to neural networks feedforward pass and backpropagation
27. Training Neural Networks: techniques to improve training
28. Deep Learning with Pytorch: build deep learning models with pytorch
29. Deep learning for Cancer detection: CNN detects skin cancer
30. Sentiment Analysis: CNN for sentiment analysis
31. Fully-convolutional neural networks: classify every pixel in an image
32. C++ programming: getting started
33. C++: vectors
34. C++: local compilation
35. C++: OOP
36. Python and C++ Speed
37. C++ Intro into Optimization
38. C++ Optimization Practice
39. Project: Optimize Histogram Filter


@@ -0,0 +1,36 @@
- Use layers wisely: they will make things a lot faster when rebuilding dockerfiles
- Use multi-stage builds: that way only what is needed to execute the final project is actually included in the image. The first stage, where the image gets compiled, contains all resources needed for compilation; those are not shared with the final image --> the footprint is drastically reduced
- multi-stage builds also make compilation a lot faster, since multiple stages that depend on the same initial stage can be built simultaneously
- Use multiple targets to build multiple images from one docker build
## Images
- `scratch`: an empty docker image: it is the smallest docker image there is and should be used as the base for shippable images
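A multi-stage build with `scratch` as the final base might be sketched like this (the base image tag and file names are illustrative):
```dockerfile
# stage 1: full toolchain, only used for compiling
FROM gcc:13 AS builder
WORKDIR /src
COPY main.cpp .
RUN g++ -O2 -static -o app main.cpp

# stage 2: ship only the statically linked binary
FROM scratch
COPY --from=builder /src/app /app
ENTRYPOINT ["/app"]
```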
# Networks
# Conflict with [[UFW]]
# Space Problem
Docker's default root directory is `/var/lib/docker`, which is part of the root partition (if your machine is set up with separate root and home partitions). Often the root partition contains a lot less available space than the home partition. However, docker can use a lot of space, since it needs to pull entire images from the web and store intermediate builds as well. By default it does not delete old files. One option would be to resize the root partition, but that is cumbersome, especially in a live system.
What have I done to overcome this problem?
I have deleted all docker files and then redefined where docker stores its files ([this website](https://www.baeldung.com/linux/docker-fix-no-space-error#changing-the-default-storage-location) helped a lot).
1. Check the current docker root directory: `docker info -f '{{.DockerRootDir }}'` --> this points to `/var/lib/docker` by default.
2. Prune all docker files. Attention, this deletes all docker related files, so make sure that your data is safe (if it is only stored within a docker container, it will be lost):
1. `docker container prune; docker image prune; docker volume prune`
2. `docker system prune -a --volumes`
3. Redefine where the docker root folder is by creating or modifying the file `/etc/docker/daemon.json`:
1. `sudo vim /etc/docker/daemon.json`
```json
{
"data-root": "/path/to/new/root/folder"
}
```
4. Restart the docker daemon: `sudo systemctl restart docker`
5. Verify the new docker root dir by rerunning the command of step 1.
Now you should be able to rebuild the docker containers that caused the problems with success.


@@ -0,0 +1,9 @@
---
aliases:
- ufw
---
# Software Implementations
## Uncomplicated Firewall - UFW
[UFW](https://en.wikipedia.org/wiki/Uncomplicated_Firewall) is a Linux software stack that makes it very easy to set up a firewall. It works by changing [iptables](https://en.wikipedia.org/wiki/Iptables). There is a known issue when using ufw together with [[Docker#Networks|docker networking]], since both modify iptables.


@@ -0,0 +1,105 @@
---
title: SQL
created_date: 2024-10-28
updated_date: 2024-10-28
aliases:
tags:
---
# SQL
SQL stands for Structured Query Language and is used to retrieve entries from a [[Database]].
Many different implementations exist, such as MySQL and SQLite.
## Crash Course Takeaways
```SQL
SELECT (DISTINCT) [column names, or * for everything]
FROM table_name
WHERE selection_statements
ORDER BY column_name (DESC)
-- only return 20 entries
LIMIT 20;
```
> [!Info] DISTINCT statement
> This will make sure that only unique entries are returned. It is used to filter out duplicates, which is a very important step when doing statistics on a dataset. Be cautious though: depending on the SELECT statement it might filter out actual datapoints instead of duplicates (e.g. `SELECT DISTINCT last_name ...` would only return one of several siblings)
### Filter
- Use selection statement after the **WHERE** keyword
- e.g. `WHERE last_name="Connor"` or `WHERE age <= 18` or `WHERE nationality in ("Swiss", "Italian", "French")`
- `AND` , `OR`, `NOT` are logical operators
- `WHERE name LIKE "%blue%"` returns any string with blue in it (`%` is a wildcard placeholder)
- `WHERE name LIKE "____"` returns all names with exactly 4 characters (`_` matches a single character)
- `WHERE name IS (NOT) NULL`: filter out nulls
### Sort
You can sort with the Keyword `ORDER BY`. If you want to reverse the order you can add the `DESC` statement at the end.
### CASE Statement
The `CASE` statement allows changing data on the fly (e.g. grouping values) without changing the underlying database entries.
```SQL
SELECT EmployeeName,
CASE
WHEN EmpLevel = 1 THEN 'Data Analyst'
WHEN EmpLevel = 2 THEN 'Middle Manager'
WHEN EmpLevel = 3 THEN 'Senior Executive'
ELSE 'Unemployed'
END
FROM Employees;
```
### Limit Keyword
With the `LIMIT` keyword you can limit the number of rows the query returns.
### Functions
- `COUNT` returns the number of entries of a query: `SELECT COUNT(*) FROM names;`
- `SUM`: sums all entries of the query
- `MIN`
- `MAX`
- `AVG`
-
Functions can be executed on subgroups of the returned DB entries. In order to achieve this, the `GROUP BY column_name` statement is used (just as in the [[Pandas]] library).
```SQL
-- The example counts players from the same team
SELECT Team, COUNT(PlayerID)
FROM Players
GROUP BY TEAM;
```
If you want to filter a second time within the subgroups of the `GROUP BY` statement you can use the `HAVING` keyword.
```SQL
-- This only uses Account Types with more than 100 Accounts in the uppermost count aggregation
SELECT AccountType, COUNT(AccountID)
FROM Accounts
GROUP BY AccountType
HAVING COUNT(AccountID) > 100;
```
#### COALESCE to default nulls
`COALESCE`: replace nulls with a default value
```SQL
SELECT AVG(COALESCE(HolidaysTaken, 0))
FROM AnnualLeave;
-- or select one of the 3 numbers as the phone number
SELECT
CustomerName,
COALESCE(HomePhone, MobilePhone, BusinessPhone) as PhoneNumber
FROM Customers;
```
### Joins
#### Inner Join (aka. join)
Joins the columns of two tables, keeping only the rows whose join keys exist in both tables.
#### Left Join (aka. left outer join)
This join includes all rows of the first table even if there is no match in the second table.
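A sketch with hypothetical `Customers` and `Orders` tables:
```SQL
-- INNER JOIN: only customers that have at least one order
SELECT c.CustomerName, o.OrderID
FROM Customers c
INNER JOIN Orders o ON o.CustomerID = c.CustomerID;

-- LEFT JOIN: every customer; OrderID is NULL where no order exists
SELECT c.CustomerName, o.OrderID
FROM Customers c
LEFT JOIN Orders o ON o.CustomerID = c.CustomerID;
```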
### INSERT INTO
is used to insert rows into database tables
### UPDATE
is used to change an existing database entry
### DELETE
is used to delete all entries that the query returns: careful, this is dangerous!
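The three statements side by side, using the hypothetical `Employees` table from the CASE example above:
```SQL
INSERT INTO Employees (EmployeeName, EmpLevel)
VALUES ('Jane Doe', 1);

UPDATE Employees
SET EmpLevel = 2
WHERE EmployeeName = 'Jane Doe';

-- without a WHERE clause this would wipe the whole table!
DELETE FROM Employees
WHERE EmployeeName = 'Jane Doe';
```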
## Examples
- The queries used in Obsidian's Dataview are quite similar.


@@ -0,0 +1,6 @@
{
"nodes":[
{"id":"d131a066d094a1b5","x":-300,"y":-267,"width":250,"height":50,"type":"text","text":"Image Processing\nInput"}
],
"edges":[]
}


@@ -0,0 +1,55 @@
---
aliases:
- BMS
---
# Research
A [battery management system](https://en.wikipedia.org/wiki/Battery_management_system) (or BMS) is a system that manages a rechargeable battery (which could be a single cell or multiple cells in a battery pack). Its main goal is to keep the battery in its safe operating area, which is usually defined by a temperature range, a voltage range and a current range that must not be exceeded. Additionally, it might measure data (voltage, current, state of charge, etc.) and report it externally, and oftentimes it also makes sure that a battery pack remains balanced (the difference between cell voltages should be as close to 0 as possible).
## Important Properties
- [Voltage](https://en.wikipedia.org/wiki/Voltage "Voltage"): minimum and maximum cell voltage
- [State of charge](https://en.wikipedia.org/wiki/State_of_charge "State of charge") (SoC) or [depth of discharge](https://en.wikipedia.org/wiki/Depth_of_discharge "Depth of discharge") (DoD), to indicate the charge level of the battery
- [State of health](https://en.wikipedia.org/wiki/State_of_health "State of health") (SoH), a variously defined measurement of the remaining capacity of the battery as % of the original capacity
- [State of power](https://en.wikipedia.org/w/index.php?title=State_of_power&action=edit&redlink=1 "State of power (page does not exist)") (SoP), the amount of power available for a defined time interval given the current power usage, temperature and other conditions
- State of Safety (SOS)
- Maximum charge current as a [charge current limit](https://en.wikipedia.org/w/index.php?title=Charge_current_limit&action=edit&redlink=1 "Charge current limit (page does not exist)") (CCL)
- Maximum discharge current as a [discharge current limit](https://en.wikipedia.org/w/index.php?title=Discharge_current_limit&action=edit&redlink=1 "Discharge current limit (page does not exist)") (DCL)
- Energy [kWh] delivered since last charge or charge cycle
- Internal impedance of a cell (to determine open circuit voltage)
- Charge [Ah] delivered or stored (this feature is sometimes called a Coulomb counter)
- Total operating time since first use
- Total number of cycles
- Temperature Monitoring
- Coolant flow for air or liquid cooled batteries
# Development
A great overview on the important characteristics when designing a BMS can be found on [monolithic power's website](https://www.monolithicpower.com/how-to-design-a-battery-management-system-bms).
The system consists of an analog front end (AFE) and a fuel gauge section.
## Analog Frontend
The AFE handles the following:
- cell balancing
- main low-side sense resistor for current measurements
- main high-side MOSFET control to connect/disconnect the battery. Can we use this as an on/off switch?
## Fuel Gauging
Article used: [chapter 3](https://www.ti.com/lit/ug/sluuco5a/sluuco5a.pdf?ts=1698826671597&ref_url=https%253A%252F%252Fwww.google.com%252F)
This term comes from the car industry, where fuel gauges measured the remaining fuel in the tank. Nowadays batteries are used as power sources far more often, and the term survived the transition. For batteries it means: how much energy is left that we can safely take out of the battery?
Texas Instruments has a technology called Impedance Track (IT) that models the battery and estimates the remaining state of charge.
The main factors are:
- measuring Qmax
- measuring cell impedance
- calculating capacities
### Factors
**Aging**: every cell ages. Qmax and the cell impedances can account for aging effects as the cell is cycled.
**Temperature**: temperature strongly affects how much charge can still be extracted from the battery
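The simplest gauging approach (not TI's Impedance Track, which additionally tracks impedance and OCV) is Coulomb counting: integrating the measured current over time. A minimal sketch with hypothetical names, assuming a known Qmax:

```python
def coulomb_count_soc(soc_start, samples, q_max_ah):
    """Track state of charge by integrating current (Coulomb counting).

    soc_start: initial SoC in percent
    samples:   list of (current_a, dt_s) pairs; positive current = discharge
    q_max_ah:  usable capacity of the cell in amp-hours (Qmax)
    """
    charge_removed_ah = 0.0
    for current_a, dt_s in samples:
        charge_removed_ah += current_a * dt_s / 3600.0  # A*s -> Ah
    soc = soc_start - 100.0 * charge_removed_ah / q_max_ah
    return max(0.0, min(100.0, soc))  # clamp to a physical range
```

This drifts with current-sensor offset and ignores aging and temperature, which is exactly why real fuel gauges combine it with impedance and OCV tracking.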
# Glossary
| Definition | Meaning |
| ---------- | ------------------------------------------------ |
| Qmax | amount of charge available in fully charged cell |
| SoC | State of Charge (in %) |
| OCV | Open circuit voltage |
| DOD | depth of discharge: during no load condition |

View File

@@ -0,0 +1,43 @@
I think the best development and documentation approach is the following:
- README: detailed documentation of everything, version controlled with iterations. We can also have multiple .md files instead of just one.
- requirements
- design decisions
	- components search (IC comparison, connectors, etc.)
- Bringup instructions
- soldering instructions
- debug instructions
- reference to design documents (e.g. from TI) in resource folder
- Software needed to use the pcb (maybe link to board software package repository)
- gitlab:
- track possible requirements for future iterations as issues tagged with `requirement`
- track issues during reviews --> also acts as discussion for design decisions --> when decision is taken, the issue should be referenced in the README
- confluence: We can also track this in gitlab as CONFLUENCE.md and then import it into confluence
- have a summary of the README that is focused solely on the usage of the pcb.
- should look like a datasheet:
- min max voltage, min max current, etc.
- minimum connections needed
- if needed instructions on how to flash software or use software to program correctly
- Should have a small troubleshooting guide that links to more detailed instructions in the README
	- should have a dos and don'ts section: what to avoid at all costs.
- Finally the google drive should be used solely for [[PCB Testing|test tracking]] and detailed test documentation with the templates that we have for development

View File

@@ -0,0 +1,46 @@
# Process
Generally speaking, the review process happens twice: once when the schematics are finished, before the layout starts, and again once the actual PCB design is done.
## Schematics Review
1. Component selection: do all components fulfill their requirements?
## Board Layout Review
This review happens after the placement of the components is finalized and before routing. The step file of the pcb including all component 3D models should be integrated into the CAD of the device. Also include cables in the mechanical review.
This review should happen between the mechanical team, the assembly team and the electronics team.
1. Is the general shape correct?
2. Does the mounting system work?
	1. including accessibility of connectors or other interfaces when mounted?
	2. including stack height when cables are connected
3. Are the major subsystems at the correct location on the PCB?
	1. Are connectors at the correct locations to minimize cable length?
4. Revisit component selection: does the component size make sense? Do connector sizes make sense?
Once this review passes, make sure that all components are ordered and available.
## Design Review
This review happens after the PCB got routed and is ready for production.
1. Create a PCB Design Review Google Sheet from [this template](https://docs.google.com/spreadsheets/d/1-4jNAzJX_8TotVCinnkB4rk5VresIWIjQV7i3lEITDk/edit).
2. Open the 3D view in Kicad (alt + 3) and review the mechanical properties
1. mounting holes? Stacking height of connectors? Does it fit at the place where it is intended to fit?
2. overlapping components?
	3. Does the silkscreen look right? Is it clear which pad is which, and which connector is which? Polarity?
3. check main ICs first:
1. split PCB into several sections, one section per main IC.
	2. double check the datasheets and their design recommendations for every IC, and check that the recommendations are applied as accurately as possible
4. Then follow the generic checklist for the rest of the PCB review (see the Google Sheets linked below).
### Production Files Review
Finally, the Gerber files, drill files and any other files that are shared with the manufacturer should be exported and reviewed as well.
# Tools
## Gitlab
We use gitlab, git and Kicad for the design process. This allows us to have a versioning history of the PCBs and collaborate easily across the globe. Gitlab is also a great way to track open features that we need to implement, possible issues we come across or future ideas that might want to be added to a certain board. Within the gitlab issues we can have conversations about a certain decision and they are always in one place to understand the reasoning for a decision even later on. Slack can be used to discuss issues and we can link discussions between the two tools.
Every gitlab project should have a README.md which is the main landing page if someone wants to work with that specific PCB in more depth. (Could we sync the readme with a Confluence article? - [seems possible](https://github.com/zonkyio/confluence-sync/blob/master/README.md) - this would allow non-technical users to have documentation).
## Diff Tool
Since KiCad is open source there are people that build tools to improve the development process. The tool for version comparison that we use is [KiRi (Kicad Revision Inspector)](https://github.com/leoheck/kiri) and we're using it for reviews of later versions to see what has changed since the last iteration.
![[Pasted image 20240129105628.png]]
## Test Tracking Tool
## Google Sheets

View File

@@ -0,0 +1,6 @@
The drone company Freefly has [published extensive testing documents](https://freefly.gitbook.io/freefly-public/products/alta-x/testing/test-documentation) for their drone Alta-X. Basically, they have a Google Sheet that summarizes all tests with some high-level information about each test, and a detailed test report is linked from the main sheet.
![[Pasted image 20240130152623.png]]
I have created a template for both the [overview sheet](https://docs.google.com/spreadsheets/d/1ZVGZX8_EhPCgwdFf9EMEJC3kuFvgf5XS2nXmJZtcvYs/edit#gid=0) and the [detailed test report](https://docs.google.com/document/d/1f-5OXSd6dn2Ha0qU43wb2jsbQDjBMwj65kqMBwrAo80/edit#heading=h.qi86e7v7x1qw) for PCB testing, which can be created at the start of every project. Whenever the designer has an idea of what should be tested, it should be added here immediately.
Obviously this concept can be extended to the mechanical hardware, the software, as well as the entire drone as an integrated test.

View File

@@ -0,0 +1,6 @@
# Dos and Don'ts
- Every [[copper]] patch on the PCB must be electrically defined (and thus mostly grounded). If they are left floating, they can act as antennas that pick up noise and radiate EMI.
# Examples
## Heatsinks and Grounding Strategies
- It is recommended to ground heat sinks: if they are placed above high-frequency ICs (>100MHz), parasitic currents can build up within the heatsink, making it act like a huge antenna and thus creating electromagnetic interference (EMI). This might cause the entire product to fail compliance tests or cause problems in other sensitive circuits close by.

View File

@@ -0,0 +1,26 @@
The servo used in the Payloadbay behaves quite strangely. Currently we use a standard PWM signal at 50Hz with an on-period between 800-2200 microseconds, as visible in the scope shot below.
![[Servo_no_noise.png]]
Whenever there is a load on the servo, it becomes acoustically very noisy and the current draw goes up significantly, which makes sense. But even when I stop the servo at a certain position the noise continues. On the signal line in the scope you can clearly see spikes roughly every 2.7 milliseconds.
![[servo_noisy.png]]
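For reference, the commanded on-time is typically a linear map from the target angle onto the 800-2200 µs range mentioned above. A small sketch; the 180° travel is an assumption about this particular servo:

```python
def angle_to_pulse_us(angle_deg, min_us=800.0, max_us=2200.0, travel_deg=180.0):
    """Linear map from commanded angle to PWM on-time (50 Hz frame assumed).

    The 800-2200 us range matches the payload-bay servo above; the 180 deg
    travel is an assumption and depends on the actual servo.
    """
    if not 0.0 <= angle_deg <= travel_deg:
        raise ValueError("angle out of range")
    return min_us + (max_us - min_us) * angle_deg / travel_deg
```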
___
# Experiments
## Current draw
When making the sound the servo draws roughly 0.5A.
## Voltage Drop of the Supply
In purple we measured the power supply of the servo, while it was making the sound and in yellow the signal. We can observe a voltage drop of roughly 0.5V in the power supply at roughly 370 Hz.
![[SDS00002.png]]
![[SDS00003.png]]
Since both the power supply and the signal are affected, it could also be the ground reference that changed, even though the voltage drop is smaller on the signal than on the power supply.
# Possible solutions
- just turn the servo off
- program the servo (torque limit, PID-values)
- a flyback diode across vcc and gnd
- a large capacitor across vcc and gnd
- rule out a power supply that is not powerful enough
- use the commanded signal to overshoot quickly (to pull it further) and then return to the desired value. This would imitate a larger P gain to overcome the friction.
- use feedback signal (ideally from the servo itself, or from an additional sensor)

View File

@@ -0,0 +1,15 @@
---
aliases:
- Buck Converter
- Boost Converter
---
# Buck Converter
A buck converter, also known as step-down converter, converts a high input voltage into a regulated lower output voltage, by switching the supply on and off at a high frequency and filtering the output.
# Boost Converter
A boost converter, also known as step-up converter, converts a low input voltage into a higher output voltage by exploiting the energy stored in an inductor: when the switch opens, the inductor opposes the sudden change in current and drives its voltage up.
![[Pasted image 20231120095014.png]]
Screenshot from [this video](https://www.youtube.com/watch?v=9QM55r5fnUk&ab_channel=TheOrganicChemistryTutor).
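For intuition, the ideal continuous-conduction conversion ratios of both converters depend only on the switching duty cycle D. A small sketch assuming ideal, lossless components:

```python
def buck_vout(vin, duty):
    """Ideal buck (step-down) in continuous conduction: Vout = D * Vin."""
    assert 0.0 <= duty <= 1.0
    return duty * vin

def boost_vout(vin, duty):
    """Ideal boost (step-up) in continuous conduction: Vout = Vin / (1 - D)."""
    assert 0.0 <= duty < 1.0  # D = 1 would mean the switch never opens
    return vin / (1.0 - duty)
```

Real converters deviate from these ratios because of switch and inductor losses and discontinuous-conduction operation at light loads.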
# Resources
http://www.runonielsen.dk/Buck_with_subharm.pdf

View File

@@ -0,0 +1,36 @@
# Glossary
| Name | Meaning |
| ---- | ---- |
| Ambient Temperature Ta | Temperature of the air around the IC |
| Junction Temperature Tj | Highest temperature in the semiconductor |
| Thermal Resistance \[°C / W] | Ability to dissipate internally generated heat: increase in Tj per dissipated watt of power. The datasheet value is usually determined empirically. |
| Case Temperature Tc | Temperature of the case |
| Maximum Junction Temperature Tjmax | The device must be kept below this, else it stops working |
| Power Dissipation Pd \[W] | Power consumed during operation |
# Introduction
Junction Temperature Tj is affected by:
- ambient temperature Ta
- Airflow / or other cooling methods
- IC packaging material and technique (flip chip vs wire bond)
- PCB material
- Heat from other sources
The junction temperature can be decreased by adding airflow or heat sinks but it will always be above the ambient temperature.
# Cooling methods
All cooling methods basically reduce thermal resistance.
The most effective way to transport the heat away is a large via array below the IC, which moves the heat through the PCB copper to the opposite layer and spreads it into the entire board. From there it dissipates into the surroundings.
# Modeling
A good explanation can be found in [this video](https://www.youtube.com/watch?v=RV6b9horB-I&ab_channel=PowerElectronics) by Martin Ordonez.
![[Pasted image 20240220154606.png]]
Heat transfer happens as conduction, convection or radiation; in PCB design it's mostly conduction that matters (convection is important for heatsink calculations, but those can usually be found in their datasheets).
A thermal resistance is used to model the process (just as an electrical resistance):
![[Pasted image 20240220154835.png]]
The thermal resistance depends on the material, the length and the area of the conduction path.
Towards the end of the video you can find details on how to calculate the final junction temperature in different scenarios (no heatsink, heatsink, forced airflow).
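The resistor analogy makes the basic no-heatsink calculation a one-liner: junction temperature is ambient temperature plus dissipated power times the junction-to-ambient thermal resistance. A minimal sketch with hypothetical names:

```python
def junction_temperature(t_ambient_c, power_w, r_theta_ja_c_per_w):
    """Tj = Ta + Pd * Rth(j-a), the thermal analogue of Ohm's law."""
    return t_ambient_c + power_w * r_theta_ja_c_per_w

def max_allowed_power(t_ambient_c, t_j_max_c, r_theta_ja_c_per_w):
    """Largest dissipation that still keeps Tj below Tjmax."""
    return (t_j_max_c - t_ambient_c) / r_theta_ja_c_per_w
```

With a heatsink or forced airflow, the single resistance is replaced by a series chain (junction-case, case-sink, sink-ambient), as shown in the video.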
# Sources
- [Infineon Guide](https://www.infineon.com/dgdl/Infineon-AN4017_Understanding_Temperature_Specifications_An_Introduction-ApplicationNotes-v11_00-EN.pdf?fileId=8ac78c8c7cdc391c017d071d497a2703).

View File

@@ -0,0 +1,8 @@
# Feedback Servo
Feedback servos have a fourth wire that sends the servo's position information back to the controller. It is the same feedback signal that the servo uses internally for its PID position control.
# Programmable Servos
Programmable servos allow internal parameters such as torque limits and PID gains to be configured over a programming interface.
# Reasons
1. Initialization: when we start the flight controller, it initializes the servo signal to the PWM value specified in the `PWM_MAIN_5_DIS` parameter. If the servo is physically at a different position during startup, it will slam to the commanded position, which puts a large strain on the system (servo as well as mechanics) and results in a large vibration.
2.

View File

@@ -0,0 +1,2 @@
# Flameshot
Best Screenshot tool out there

View File

@@ -0,0 +1,3 @@
# Opensource
## Klipper3d
https://www.klipper3d.org/

View File

@@ -0,0 +1,3 @@
Freefly has published the assembly steps of their Alta-X drone. This can be useful for our own setup.
![[AX_ManufacturingFlowChart_02.pdf]]

View File

@@ -0,0 +1,3 @@
Damped Fasteners
![[Pasted image 20240423163505.png]]
- good to damp vibrations

View File

@@ -0,0 +1 @@
This article compares different actuators that are used in robotic systems.

View File

@@ -0,0 +1,8 @@
https://www.geeksforgeeks.org/a-search-algorithm/
# To Remember
1.
# To Study / Verify
1. Can we add wind to the heuristic function? --> it is easier to move with the wind than against it
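A minimal sketch of A* on a 4-connected grid (names and grid encoding are my own). Regarding the wind question above: wind is most naturally modeled as direction-dependent *edge costs* at the marked line; the heuristic must then still underestimate the true cost to stay admissible:

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid. grid: list of strings, '#' = obstacle."""
    rows, cols = len(grid), len(grid[0])

    def h(p):
        # Manhattan distance: admissible (never overestimates) on this grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start, [start])]  # (f = g + h, g, node, path)
    best_g = {start: 0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] != '#':
                ng = g + 1  # wind could become a direction-dependent cost here
                if ng < best_g.get(nxt, float('inf')):
                    best_g[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None  # goal unreachable
```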

View File

@@ -0,0 +1,24 @@
---
title: Divide and Conquer
created_date: 2024-10-22
updated_date: 2024-10-22
aliases:
tags:
---
# Divide and Conquer
This is an algorithm design paradigm where the main problem is split into subproblems, which are themselves split further until they can be solved at a small scale and, if needed, stitched back together into the overall solution. This recursive approach is often used in [[computer science]].
Mathematically, such algorithms are often proven correct by [[mathematical induction]].
## Example Algorithms
- [[Sorting Algorithms]] such as quicksort or merge sort
- Multiplying large numbers: [[Karatsuba Algorithm]]
- Find the closest pair of points
- Computing the discrete Fourier Transform ([[Fast Fourier Transform |FFT]])
It can also be used in [[Computer Vision]] by first defining interesting [[Region of Interests |ROI]]s and only running the heavy algorithm on the subpart of the image.
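Merge sort is the textbook instance of the paradigm: divide the input in half, sort each half recursively, then stitch the sorted halves back together. A minimal sketch:

```python
def merge_sort(items):
    # Divide: a list of length <= 1 is trivially sorted
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Conquer: merge the two sorted halves into the overall solution
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```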
## Advantages
- [[GPU]]s can be used to parallelize those subtasks and thus run the process much faster
- simplification: the problems become simpler to solve
- Algorithmic efficiency: reduce the [[big-O notation]]
- Memory access: if the subproblem is small enough, it can be solved entirely in the CPU [[cache]], which makes it much faster

View File

@@ -0,0 +1,12 @@
---
aliases:
- ICP
---
[Iterative closest point](https://en.wikipedia.org/wiki/Iterative_closest_point) (ICP) is an algorithm to minimize the difference between two clouds of points, which means it can be used to reconstruct 2D or 3D surfaces from different scans. It tries to solve the generic problem of [point-set registration](https://en.wikipedia.org/wiki/Point-set_registration).
# Implementations
- The library [libpointmatcher](https://github.com/norlab-ulaval/libpointmatcher?tab=readme-ov-file)
- The lightweight library [simpleICP](https://github.com/pglira/simpleICP)
-
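The core of each ICP iteration is solving for the rigid transform that best aligns matched point pairs. A 2D sketch of that single step, assuming correspondences are already known (a full ICP loop would alternate this with nearest-neighbour matching; names are my own):

```python
import math

def align_2d(source, target):
    """Best-fit rotation + translation mapping paired source points onto target."""
    n = len(source)
    csx = sum(p[0] for p in source) / n; csy = sum(p[1] for p in source) / n
    ctx = sum(p[0] for p in target) / n; cty = sum(p[1] for p in target) / n
    # cross- and dot-sums of the centred point pairs give the optimal rotation
    s_cross = s_dot = 0.0
    for (sx, sy), (tx, ty) in zip(source, target):
        ax, ay = sx - csx, sy - csy
        bx, by = tx - ctx, ty - cty
        s_cross += ax * by - ay * bx
        s_dot += ax * bx + ay * by
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    # translation maps the rotated source centroid onto the target centroid
    tx_off = ctx - (c * csx - s * csy)
    ty_off = cty - (s * csx + c * csy)
    return theta, (tx_off, ty_off)
```

In 3D the same step is usually solved via SVD (Kabsch algorithm), which is what libraries like libpointmatcher do internally.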

View File

@@ -0,0 +1,13 @@
---
title: Karatsuba Algorithm
created_date: 2024-10-22
updated_date: 2024-10-22
aliases:
tags:
big-O: 1.58
---
# Karatsuba Algorithm
The Karatsuba algorithm is used to multiply two numbers. It uses a [[Divide and Conquer]] strategy to be computationally efficient, with a [[big-O notation]] of $n^{\log_{2}3}\approx n^{1.58}$ instead of the naive $n^2$.
The main takeaway is that the algorithm splits each number at a chosen base, performs three smaller multiplications instead of four, and recovers the middle term by subtracting corrective terms from the third product.
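A minimal sketch of the recursion, splitting in base 10 for readability:

```python
def karatsuba(x, y):
    # base case: single-digit factors multiply directly
    if x < 10 or y < 10:
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    base = 10 ** m
    x_hi, x_lo = divmod(x, base)
    y_hi, y_lo = divmod(y, base)
    # three recursive multiplications instead of four
    a = karatsuba(x_hi, y_hi)
    b = karatsuba(x_lo, y_lo)
    c = karatsuba(x_hi + x_lo, y_hi + y_lo)
    # the middle term is recovered by subtracting the corrective terms a and b
    return a * base ** 2 + (c - a - b) * base + b
```

Production implementations split in a binary base and fall back to schoolbook multiplication below a size threshold, but the structure is the same.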

View File

@@ -0,0 +1,47 @@
Sources
https://www.youtube.com/watch?v=kRp3eA09JkM&ab_channel=HummingbirdRobotics
https://github.com/thehummingbird/robotics_demos/blob/main/behavior_trees/grasp_place_robot_demo/bt_demo.cpp
- https://github.com/polymathrobotics/ros2_behavior_tree_example/tree/main/src/plugins
- https://github.com/BehaviorTree/BehaviorTree.CPP/issues/412
- https://github.com/Adlink-ROS/BT_ros2
# Using BT with ROS2
https://www.youtube.com/watch?v=KO4S0Lsba6I&ab_channel=TheConstruct
![[Pasted image 20231005132551.png]]
![[Pasted image 20231005134909.png]]
![[Pasted image 20231005135001.png]]
![[Pasted image 20231005135120.png]]
### Move Robot
![[Pasted image 20231005135254.png]]
![[Pasted image 20231005135348.png]]
![[Pasted image 20231005135419.png]]
Rotate class
![[Pasted image 20231005135545.png]]
main
![[Pasted image 20231005135626.png]]
![[Pasted image 20231005135643.png]]
## Definitions
- Sequence: can be considered as a logical AND gate
- Fallback: can be considered as a logical OR gate
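The two control nodes above can be sketched as plain classes. This is a toy model; real libraries such as BehaviorTree.CPP also have a RUNNING status and many more node types:

```python
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Condition:
    """Leaf node wrapping a boolean check."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return SUCCESS if self.fn() else FAILURE

class Sequence:
    """Logical AND: ticks children in order, fails on the first FAILURE."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

class Fallback:
    """Logical OR: ticks children in order, succeeds on the first SUCCESS."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE
```

For example, `Fallback(is_door_open, Sequence(reach_handle, push_door))` only tries to open the door when it is not already open.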
# Library Description
Alternatively you can use the python library: https://github.com/splintered-reality/py_trees_ros/tree/devel
## Behaviortree Factory
This is where the behavior tree logic is stored and managed. Nodes that implement actions or conditions need to be registered here so that they can be executed. The BehaviorTree factory is the place where the [[Business Logic]] is implemented.
# BehaviorTree Design
![[Pasted image 20231005172207.png]]

View File

@@ -0,0 +1,3 @@
Mapping in robotics is the process of measuring and sensing the environment and using this information to populate a map. In the process, the information is sorted into the categories (usually obstacles and free space) that the specific robotic application needs.
In order to start mapping we need to have information about where we are and thus [[Localization]] is required.

View File

@@ -0,0 +1,6 @@
It is an open-loop control technique that tries to predict and counteract vibrations in an actuated system such as a 3D printer. The first couple of minutes of [this video](https://youtu.be/Fe_BFGg_ojg) explain a bit more.
# Use
## 3D Printers
This year, 3D printers have become 10 times faster while keeping the same print quality, mostly because of input-shaper technology. The author of the video above uses a frequency test suite to test for resonance peaks. Next to the large peaks there are often small peaks that correspond to slightly loose screws. The image below shows the spectrum before and after tightening all screws of the frame; the first of the double peaks around 40Hz completely disappeared after the tightening.
![[Pasted image 20240312175212.png]]

View File

@@ -0,0 +1,2 @@
Path planning is the process of finding a path between a starting pose and a goal pose.
In order to plan a path we need a map, which is a representation of the environment. The map usually is dynamic and calculated by a process called [[Mapping]].

View File

@@ -0,0 +1,41 @@
[[ROS2]] is a bit tedious to debug, because it is inherently asynchronous and multi threaded.
A good way is to use VSCode for example like this:
https://gist.github.com/JADC362/a4425c2d05cdaadaaa71b697b674425f
As always the nav2 library is a good place for resources: [get backtrace explanation](https://navigation.ros.org/tutorials/docs/get_backtrace.html).
## Requirements
In order to run this we need to install gdbserver: `sudo apt install gdbserver`
## Launch Files
In the launch file, when adding a Node, add a `prefix="gdbserver localhost:3000"`:
```python
Node(
package='nav3_controller',
executable='controller_server',
output='screen',
respawn=use_respawn,
respawn_delay=2.0,
parameters=[configured_params],
prefix='gdbserver localhost:3000',
arguments=['--ros-args', '--log-level', log_level],
remappings=remappings + [('offboard_cmd', 'offboard_velocity_cmd')]
)
```
This runs the node under a gdb server. Now we need to configure VSCode to attach to this debug session (as explained [here](https://answers.ros.org/question/267261/how-can-i-run-ros2-nodes-in-a-debugger-eg-gdb/)): add a new launch configuration like this:
```json
{
"version": "0.2.0",
"configurations": [
{
"name": "C++ Debugger",
"request": "launch",
"type": "cppdbg",
"miDebuggerServerAddress": "localhost:3000",
"cwd": "/",
"program": "[build-path-executable]"
}
]
}
```

View File

@@ -0,0 +1,31 @@
Also known as managed Nodes in [[ROS2]]. The [[ROS2 - NAV2 Library|NAV2]] library makes good use of it.
From [ROS2 Design](https://design.ros2.org/articles/node_lifecycle.html):
>A managed life cycle for nodes allows greater control over the state of ROS system. It will allow roslaunch to ensure that all components have been instantiated correctly before it allows any component to begin executing its behaviour. It will also allow nodes to be restarted or replaced on-line.
>The most important concept of this document is that a managed node presents a known interface, executes according to a known life cycle state machine, and otherwise can be considered a black box. This allows freedom to the node developer on how they provide the managed life cycle functionality, while also ensuring that any tools created for managing nodes can work with any compliant node.
There are 4 primary states: *unconfigured, inactive, active, finalized*
There are 7 transitions: *create, configure, cleanup, activate, deactivate, shutdown and destroy*
## States
All nodes start in the **unconfigured** state, which is essentially an empty state where every node begins and may also end.
More important is the **inactive** state: its purpose is to breathe life into a node. It allows the user to read parameters, add subscriptions and publications, and (re)configure the node so that it can fulfill its job. All of this happens while the node is not running; in this state it will not receive any data from other processes.
## Transition Callbacks
The main functions to implement for a custom node in the lifecycle scheme are:
### onConfigure()
Here we implement the things that are executed only once in the node's lifetime, such as obtaining permanent memory buffers and setting up topic publications/subscriptions that do not change.
### onCleanup()
This is the transition function that is called when a node is being taken out of service (essentially the opposite of *onConfigure()*). It leaves the node without any state, so that there is no difference between a node that got cleaned up and one that was just created.
### onActivate()
This callback is responsible for any final preparations before the node executes its main purpose, for example acquiring resources needed for execution, such as access to hardware (it should return quickly, without a lengthy hardware startup).
### onDeactivate()
This callback should undo anything that *onActivate()* did.
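The primary states and transitions above can be sketched as a plain state machine. This is a toy model; real lifecycle nodes also pass through transition states (e.g. *configuring*) and have error handling:

```python
class ManagedNode:
    """Toy model of the ROS 2 lifecycle state machine (primary states only)."""
    TRANSITIONS = {
        ("unconfigured", "configure"): "inactive",
        ("inactive", "activate"): "active",
        ("active", "deactivate"): "inactive",
        ("inactive", "cleanup"): "unconfigured",
        ("unconfigured", "shutdown"): "finalized",
        ("inactive", "shutdown"): "finalized",
        ("active", "shutdown"): "finalized",
    }

    def __init__(self):
        self.state = "unconfigured"

    def trigger(self, transition):
        key = (self.state, transition)
        if key not in self.TRANSITIONS:
            raise ValueError(f"invalid transition {transition!r} from {self.state!r}")
        self.state = self.TRANSITIONS[key]
        return self.state
```

The managing node described below is exactly the component that calls `trigger`-like services on each managed node and reacts to failures.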
## Management Interface
This is a common interface to allow a managing node to manage the different lifecycle nodes accordingly.
## Managing Node
This is the node that loads the different lifecycle nodes and is responsible to bring them from one state into the next and handle any error they feed back.

View File

@@ -0,0 +1,40 @@
---
aliases:
- NAV2
- Navigation
- ROS2
---
- How is the NAV2 library structured?
- What are the core concepts?
- How can we utilize it to improve our own code?
# ROS2 and NAV2
Because navigation is usually a long-running task, [[ROS2 - NAV2 Library|NAV2]] uses [[ROS2]] actions (servers and clients) to implement and execute navigation tasks. Action servers implement the actual execution in a separate thread and can therefore run in a blocking manner (a [shared Future object](https://en.wikipedia.org/wiki/Futures_and_promises) is used to communicate feedback and results). Feedback can be shared both synchronously through callbacks and asynchronously by querying the shared future object. In either case, spinning the client node is required.
## Lifecycle Nodes (aka. Managed Nodes)
NAV2 relies heavily on [[ROS2 - Lifecycle Nodes]], because it helps to structure the program in reasonable ways for commercial uses and debugging.
All servers in NAV2 use lifecycle nodes, and it is good convention for ROS systems in general to use lifecycle nodes where possible.
# Behavior Trees
[[Behaviour Trees]] are used as the main concept to implement complex tasks and the application logic. In order to do that behaviors are broken down into primitives (very basic behavior).
From [NAV2 documentation:](https://navigation.ros.org/concepts/index.html)
>For this project, we use [BehaviorTree CPP V3](https://www.behaviortree.dev/) as the behavior tree library. We create node plugins which can be constructed into a tree, inside the `BT Navigator`. The node plugins are loaded into the BT and when the XML file of the tree is parsed, the registered names are associated. At this point, we can march through the behavior tree to navigate.
# Source Code Walk Through
The version of NAV2 I used to write our own nav3 library was 7009ffba from October 16th, 2023.
## NAV2 Common
This package contains only launch files, implementing classes that make it easier to write other launch files: functions for rewriting part of a file, replacing strings, checking whether node parameters are available, etc.
# Navigation Servers
## Planners
## Controllers
There is a single GoalChecker plugin that implements the GoalChecker interface; it is called SimpleGoalChecker.
# Navigation illumination Video
[Vimeo Video](https://vimeo.com/106994708)
<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/106994708?h=5972d2a502" style="position:absolute;top:0;left:0;width:100%;height:100%;" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" allowfullscreen></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>
Controller (Local Planner)
![[Pasted image 20231019145329.png]]

View File

@@ -0,0 +1,83 @@
The Pluginlib is a library for [[ROS2]] that allows a very modular development. It is heavily used in the [[ROS2 - NAV2 Library|NAV2]] Library.
From the [pluginlib tutorial]():
>`pluginlib` is a C++ library for loading and unloading plugins from within a ROS package. Plugins are dynamically loadable classes that are loaded from a runtime library (i.e. shared object, dynamically linked library). With pluginlib, you do not have to explicitly link your application against the library containing the classes instead `pluginlib` can open a library containing exported classes at any point without the application having any prior awareness of the library or the header file containing the class definition. Plugins are useful for extending/modifying application behavior without needing the application source code.
Basically, it allows you to define an abstract base class that specifies the interface of the plugin: which (`virtual`) functions need to be overridden and which variables exist. You can then derive multiple packages with different implementations of this plugin base class, which is used by an executor. Those plugins can be loaded at runtime without prior knowledge about them because they follow the same structure.
Requirements:
1. Constructor without parameters -> use initialization function instead.
2. Make header available to other classes
1. Add the following snippet to `CMakeLists.txt`:
```cmake
install(
DIRECTORY include/
DESTINATION include
)
...
ament_export_include_directories(
include
)
```
3. In the C++ file where you define your plugins you need to add the following macro at the very end. This registers the plugin classes when the corresponding library is loaded.
```cpp
#include <pluginlib/class_list_macros.hpp>
PLUGINLIB_EXPORT_CLASS(polygon_plugins::Square, polygon_base::RegularPolygon)
PLUGINLIB_EXPORT_CLASS(polygon_plugins::Triangle, polygon_base::RegularPolygon)
```
4. The plugin loader needs some information to find the library and to know what to reference in the library. Therefore an XML file needs to be written, plus an export line in the package.xml file. With those two additions ROS knows everything it needs to know in order to use the plugins. In the following snippets the two plugins, Square and Triangle, are defined in a plugins.xml file.
```xml
<library path="polygon_plugins">
<class type="polygon_plugins::Square" base_class_type="polygon_base::RegularPolygon">
<description>This is a square plugin.</description>
</class>
<class type="polygon_plugins::Triangle" base_class_type="polygon_base::RegularPolygon">
<description>This is a triangle plugin.</description>
</class>
</library>
```
```cmake
# polygon_base: package with base class
# plugins.xml: relative path to plugin file defined above
pluginlib_export_plugin_description_file(polygon_base plugins.xml)
```
The plugins can then be used in any package you want:
```cpp
#include <pluginlib/class_loader.hpp>
#include <polygon_base/regular_polygon.hpp>
int main(int argc, char** argv)
{
// To avoid unused parameter warnings
(void) argc;
(void) argv;
pluginlib::ClassLoader<polygon_base::RegularPolygon> poly_loader("polygon_base", "polygon_base::RegularPolygon");
try
{
std::shared_ptr<polygon_base::RegularPolygon> triangle = poly_loader.createSharedInstance("polygon_plugins::Triangle");
triangle->initialize(10.0);
std::shared_ptr<polygon_base::RegularPolygon> square = poly_loader.createSharedInstance("polygon_plugins::Square");
square->initialize(10.0);
printf("Triangle area: %.2f\n", triangle->area());
printf("Square area: %.2f\n", square->area());
}
catch(pluginlib::PluginlibException& ex)
{
printf("The plugin failed to load for some reason. Error: %s\n", ex.what());
}
return 0;
}
```
>Important note: the `polygon_base` package in which this node is defined does NOT depend on the `polygon_plugins` class. The plugins will be loaded dynamically without any dependency needing to be declared. Furthermore, we're instantiating the classes with hardcoded plugin names, but you can also do so dynamically with parameters, etc.

View File

@@ -0,0 +1,6 @@
- [ ] #todo/b Write overview of what ros2 does in my own words. Advantages / disadvantages
# Build System
The ROS2 [build system](https://docs.ros.org/en/humble/Concepts/Advanced/About-Build-System.html) is a challenging part, because packages written in different languages such as [[C++]] or [[Python]] need to be built together in order to form a unit.
To achieve this ROS2 relies heavily on the [[Colcon]] build system, which under the hood uses [[CMake]] for C++ packages and setuptools for Python. In order to define dependencies across the different packages and languages, ROS2 packages always contain a `package.xml` file also known as manifest file that contains essential metadata about the package, such as dependencies and others.

View File

@@ -0,0 +1,13 @@
---
title: AnySkin
created_date: 2024-10-28
updated_date: 2024-10-28
aliases:
tags:
- sensor
---
# AnySkin
This is a skin sensor that detects tactile touch by measuring distortions in magnetic fields generated by iron particles in a silicone-moulded skin. The skin is therefore soft and can be fabricated in any shape.
## Source
[AnySkin: Plug-and-play Skin Sensing for Robotic Touch](https://any-skin.github.io/)

View File

@@ -0,0 +1,7 @@
[This article ](https://robohub.org/anatomy-of-a-robotic-system/) does a good job in describing the different levels of abstraction within a robotic system. The key takeaways for me are:
- In a robot there must be a high-level feedback loop (beyond low-level control loops) that does not require human input, else it is a machine.
- A robot operates in the physical world. A chat-bot is not a robot.
- Several layers of abstractions can be used to define robotic behavior:
- Functional layer: control, raw sensory perception, actuation and speech generation (move arm to position XY, apply force Z)
- Behavioral layer: motion planning, navigation and making sense of language (open the door --> which makes use of the functional layer)
- Abstract layer: Task and behavior planning ([[Behaviour Trees]] or Finite State Machines), Semantic Understanding and Reasoning (What is our environment made of, can I interact with it? What do I want to achieve and what can I do to succeed?)