These are random notes I took when confronted with unique or recurring problems. Be careful, as some of them may be obsolete.
Erase password instead of hitting backspace like a monkey
C-u usually erases everything from the start of the line to the cursor.
Make a transparent GIF from transparent PNGs
$ magick -delay 10 -loop 0 -dispose Background *.png output.gif
The delay is in hundredths of a second, so -delay 10 means 100 ms per frame.
Mail server troubleshooting
After the expiration of my emails.myserver1.com and emails.myserver2.com certificates, I had trouble with some mail clients when sending mail. As a temporary solution I configured msmtp (~/.config/msmtp/config) to disable certificate verification but keep fingerprint checking. Here is how to check the fingerprints:
for server in "myserver1" "myserver2"; do
    openssl s_client -connect emails.${server}.com:465 < /dev/null 2>/dev/null | openssl x509 -fingerprint -noout -in /dev/stdin
done
And in ~/.config/msmtp/config I put tls_fingerprint <fingerprint>.
In /etc/cron.d/certbot I added:
0 */12 * * * root certbot renew --no-random-sleep-on-renew && systemctl restart postfix dovecot
Upgrade every OpenWrt packages
opkg list-upgradable | cut -f 1 -d ' ' | xargs -r opkg upgrade
Extract URLs from PDF
Some links cannot be seen with pdftotext. I used this trick from SO:
$ nb_pages=220   # set the number of pages of the pdf
$ pdftk file.pdf cat 1-$nb_pages output - | strings | grep -E "https?://"
Dump mysql database
$ USERNAME=bouzin
$ DATABASE_NAME=something
$ mysqldump -u $USERNAME -p $DATABASE_NAME > dump.sql
Sort in place and keep unique
sort -uo file.txt{,}
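The {,} here is plain brace expansion (bash/zsh): file.txt{,} expands to file.txt file.txt, so the command is really sort -u -o file.txt file.txt (output file first, input file second). A quick sketch, with demo.txt as a made-up filename:

```shell
# brace expansion duplicates the filename: the -o target comes first, the input second
echo file.txt{,}         # prints: file.txt file.txt
printf 'b\na\nb\n' > demo.txt
sort -uo demo.txt{,}     # same as: sort -u -o demo.txt demo.txt
cat demo.txt             # now sorted and de-duplicated
```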
Process substitution to filter output into a file while keeping the whole output in the terminal
echo "hello world" | tee >(grep -o world >> output.txt)
This will print “hello world” on screen, but keep only “world” in output.txt.
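A sketch of the same trick with two destinations at once (full.txt and words.txt are made-up names). Note that the >(...) process runs asynchronously, so give it a moment before reading its file:

```shell
# tee duplicates the stream: full copy to full.txt, filtered copy to words.txt
echo "hello world" | tee full.txt >(grep -o world > words.txt) > /dev/null
sleep 0.2   # let the substituted process finish writing
```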
Firefox history command line
# web history sorted by date
$ sqlite3 ~/.mozilla/firefox/**xxxx**.default-release-**xxxx**/places.sqlite \
    "SELECT datetime(moz_historyvisits.visit_date/1000000,'unixepoch'), moz_places.url
     FROM moz_places, moz_historyvisits
     WHERE moz_places.id = moz_historyvisits.place_id" | sort
# unique urls visited
$ sqlite3 ~/.mozilla/firefox/**xxxx**.default-release-**xxxx**/places.sqlite "SELECT url FROM moz_places"
https://aurelieherbelot.net/web/read-firefox-history-linux-terminal/
Convert file to binary then back to original format with vim
Something I used to modify firmware when I was tweaking the NB6VAC. Use xxd in vim:
:%! xxd -b   # this converts the file to binary. make your changes, then
:%! xxd -r   # this reverts
MITM Android phone traffic
I used a man-in-the-middle technique to try to capture traffic from my Android phone. (Goal: intercept packets from a CloudEdge application I now have good reason not to trust…)
- Start Kali Linux; make sure every device is on the same WiFi (!)
- sudo airmon-ng start wlan1
- wireshark > capture on “wlan0” interface
- filter: “ip.addr==my_phone_ip”
- sudo ettercap
- “scan for hosts”, “hosts list” > select router IP (target1) and phone IP (target2)
- “current targets”
- “MITM” > “start ARP poisoning”
- wireshark should start showing traffic
Add zsh interactive comments option
Interactive comments are on by default in Bash. The option allows using comments when writing commands in the terminal.
# ~/.config/zsh/.zshrc
setopt interactivecomments
Now you can write:
echo "this is printed" # this is not
Break lines when compiling ".md" to ".pdf" with pandoc
Add this to the YAML header options:
header-includes:
  - |
    ```{=latex}
    \usepackage{fvextra}
    \DefineVerbatimEnvironment{Highlighting}{Verbatim}{breaklines,breakanywhere,commandchars=\\\{\}}
    ```
git add portions of code
$ git add --patch file/to_patch
$ git add --help
...
patch
    This lets you choose one path out of a status like selection. After
    choosing the path, it presents the diff between the index and the
    working tree file and asks you if you want to stage the change of
    each hunk. You can select one of the following options and type return:

    y - stage this hunk
    n - do not stage this hunk
    q - quit; do not stage this hunk nor any of the remaining ones
    a - stage this hunk and all later hunks in the file
    d - do not stage this hunk nor any of the later hunks in the file
    g - select a hunk to go to
    / - search for a hunk matching the given regex
    j - leave this hunk undecided, see next undecided hunk
    J - leave this hunk undecided, see next hunk
    k - leave this hunk undecided, see previous undecided hunk
    K - leave this hunk undecided, see previous hunk
    s - split the current hunk into smaller hunks
    e - manually edit the current hunk
    ? - print help
robots.txt
User-agent: *
Disallow: /
initramfs problem (ERROR: device /dev/mapper/xxx not found, skipping fsck)
After copying the “new” mkinitcpio.conf.pacnew over my “old” one and blindly running mkinitcpio -P, I got this error on restart, in the uncommon “emergency shell”:
/dev/mapper/vgarchlinux-root not found, skipping fsck
Too bad. I downloaded an Arch ISO, ran my dd command, and ran this to solve it:
$ mount /dev/mapper/vgarchlinux-root /mnt
$ cd /mnt
$ mount -t proc /proc proc/   # necessary to run mkinitcpio in chroot
$ mount --rbind /sys sys/     # necessary to run mkinitcpio in chroot
$ mount --rbind /dev dev/     # necessary to run mkinitcpio in chroot
$ chroot .
$ mkinitcpio -P
And voilà!
Remove user from group
$ gpasswd --delete user group
Repair surf browser
First be sure to remove this old symbolic link:
$ sudo rm /usr/lib/surf/libsurf-webext.so
Then either:
$ WEBKIT_DISABLE_DMABUF_RENDERER=1 surf
Or better:
$ cat /sys/module/nvidia_drm/parameters/modeset
N   # <- if N we need to enable it
$ echo options nvidia_drm modeset=1 | sudo tee /etc/modprobe.d/nvidia_drm.conf
Compress images
- png:
optipng image.png
- jpeg:
jpegoptim image.jpeg
hashcat start at a specific line in the wordlist
$ start_at_line=100   # whatever
$ hashcat "hashex.txt" "wordlist.txt" -s ${start_at_line}
grep non-ASCII characters
$ grep -aP "[\x80-\xFF]" file.txt
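A quick way to see this in action (sample.txt is a made-up file; its second line contains a UTF-8 “é”, whose bytes fall in the \x80-\xFF range). Note that -P requires GNU grep built with PCRE support:

```shell
# -a treats the file as text, -P enables the \xNN byte-range syntax
printf 'plain ascii\ncaf\xc3\xa9\n' > sample.txt
grep -aP "[\x80-\xFF]" sample.txt    # matches only the accented line
```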
Remove ^M at end of lines with vim
$ vim example.txt
...
abc^M
defg^M
...
To remove this, enter this in vim:
:e ++ff=dos
:set ff=unix
:wq
This forces vim to re-read the file in dos file format. Then we set it back to unix file format, so the carriage returns are dropped on write.
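The same cleanup can be done from the shell without opening vim, for instance with GNU sed (dos2unix would work too). A sketch with a made-up example.txt:

```shell
# build a small CRLF ("dos") file, then strip the trailing \r from each line
printf 'abc\r\ndefg\r\n' > example.txt
sed -i 's/\r$//' example.txt   # GNU sed; the file now has plain unix line endings
```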
Perl REPL
rlwrap -A -pgreen -S"perl> " perl -wnE'say eval()//$@'
Reset email git
I had a problem when fetching a git repo I made on my LFS system: it had stored my real email address and I could not push it to GitHub (GitHub itself warned me).
To solve this I had to set my “no-reply” address in the local repo:
git config user.email "blabla@users.noreply.github.com"
git commit --amend --reset-author --no-edit
git push
Perl modules à la Python
This is my current go-to for modules in Perl.
Let's say I have main.pl and my_module.pl, where the module contains only functions.
We need to add 1; at the end of the module so it evaluates to true when imported into main.pl.
Then, in main.pl, we import with this line: require "./my_module.pl".
It is important to know that functions from my_module called in main need parentheses around their arguments (even when they do not take any!). For example: a_function_from_module();.
Find a file from a vim buffer / switch buffers
:find filename
:b filename # (can be truncated)
git: find all modifications made to a file
git log -p --full-diff -- filename
Export STL as single object FreeCAD/Prusa Slicer
In FreeCAD:
- select every piece of the object we want (Shift+left_click)
- select the Part menu
- then Part > Compound > Create compound
Now the object is a single object and can easily be imported into PrusaSlicer.
Plot from a file with gnuplot
$ cat file.txt
1.2
12.4
0.2
...
$ gnuplot
gnuplot> plot "file.txt" with lines
Compile Common Lisp
$ sbcl --load hello.lisp
* (load "hello.lisp")
* (require "asdf")
* (setq uiop:*image-entry-point* #'main)   ; imagining that the entry point is function "main"
* (uiop:dump-image "hello.exe" :executable t)
Another way (my own) to compile a Lisp file:
$ sbcl --eval '(compile-file "my_script.lisp")' --eval '(quit)'
$ chmod +x ./my_script.fasl
$ ./my_script.fasl
Inspired by this:
$ sbcl
* (compile-file "my_program.lisp")
* (load "my_program.fasl")
Else, directly in a REPL: (load (compile-file "my_program.lisp")).
Revert to a commit
I had trouble with an update (package xp-pen-tablet). It broke my pen tablet.
To revert, I did:
git clone https://aur.archlinux.org/xp-pen-tablet.git
cd ./xp-pen-tablet.git
git revert --no-commit eef350b9b9bbdbc43ebd18e19f49b99869977d53 HEAD
git commit
makepkg -i
vim buffer to html
:runtime syntax/2html.vim
Which IP is a program connected to
netstat -n --program | grep "firefox"
Check kernel config in Arch Linux
zcat /proc/config.gz
Check if a streamer is live on Twitch
#!/bin/sh
username="$1"
curl -sL "https://twitch.tv/$username" | grep -o "isLiveBroadcast" && notify-send "$username is live!"
kernel make menuconfig with bad colors (change theme)
export MENUCONFIG_COLOR=mono
But… a better (?) alternative for me is to use make nconfig! It has vim keybindings!
sed: replace only if a match occurred
I tend to use grep and sed together: one to match, the other to replace. But we can match with sed directly:
$ echo "Hello world" | sed "/w/ s/world/planet/"
Hello planet
$ echo "Hello world" | sed "/z/ s/world/planet/"
Hello world
This replaces only when the line matches /w/. Note that nothing is replaced when trying to match “Hello world” against /z/.
sed in place, but keep a backup of the original file
sed -i.bak "s/this/that/" file.txt
This will write the changes into file.txt, but also create a backup of it in file.txt.bak.
Note that this works for multiple files too:
sed -i.bak "s/this/that/" file1.txt file2.txt file3.txt
To prefix the backup “extension” instead: -i'bak.*' will produce bak.file1.txt, bak.file2.txt, etc.
https://learnbyexample.github.io/learn_gnused/in-place-file-editing.html
Useful shortcuts emacs
C-x C-; = comment/uncomment lines.
C-x C-x = reselect the previous selection.
Useful commands for git
rebasing when merging
After git merge somebranch, some files might be in conflict. To solve this, modify the conflicting files, then git add the_files and git merge --continue (or git rebase --continue if you were rebasing).
rebasing for squashing
Squashing compresses multiple previous commits into one: git rebase -i HEAD~n, where -i stands for interactive and n is an integer corresponding to the number of commits to merge together. To know n, check git log and decide for yourself. After squashing the commits (squash every commit under the first one), git push --force origin will send the result to your remote working branch.
send non-committed modifications to a new branch without changing the current one
This is especially useful when you made changes to a branch you did not want to change. git checkout -b newbranch will create newbranch and carry the non-committed changes over to it, and (!) will leave the current branch untouched!
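A minimal sketch (the demo directory and notes.txt are made-up names): the uncommitted file simply follows you onto the new branch, and the original branch is left as it was.

```shell
mkdir demo && cd demo && git init -q
echo "work in progress" > notes.txt   # uncommitted change
git checkout -q -b newbranch          # create the branch and switch to it
git branch --show-current             # prints: newbranch
cat notes.txt                         # still there, untouched
```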
Squash commits Github
previous_commits_to_squash=3   # check with git log to know how many should be squashed
git rebase -i HEAD~"${previous_commits_to_squash}"
# change "pick" to "squash" for every commit under the first one
# merge the commit messages
git push --force origin
Find files from today with find
find <path> -daystart -ctime 0 -print
For the last hour:
find <path> -cmin -60   # change (status) time
find <path> -mmin -60   # modification time
find <path> -amin -60   # access time
Replace inside a matched pattern sed
echo apple1_blabbla | sed '/apple./ { s/apple1/apple2/g; }'
echo apple1 | sed -e 's/\(a\)\(p*\)\(le\)1/\1\2\32/g'   # use capture groups!
Secure website with password apache2
Add a user to an existing file:
sudo htpasswd /etc/apache2/passwd/<FILE> <username>
Otherwise, create the file:
sudo htpasswd -c /etc/apache2/passwd/<FILE> <username>
Then in /etc/apache2/sites-enabled/XXX.conf append this to the website:
<Location />
    AuthType Basic
    AuthName "Text to greet with :)"
    AuthBasicProvider file
    AuthUserFile "/etc/apache2/passwd/passwords"
    Require user john patrick
</Location>
(Note: Require user takes a space-separated list of users, without commas.)
ffmpeg compress mkv
ffmpeg -i input.mkv -vcodec libx265 -crf 28 output.mp4
Set up a pacman hook
See man 5 alpm-hooks
.
I had the following persistent problem: when rebuilding the Linux initcpios (rebuilding the initramfs images), my keyboard bindings (keymaps) were reset to default. This resulted in me not having the correct Super key anymore. A way to solve that was creating a pacman hook in /etc/pacman.d/hooks/reload-keyboard-mappings.hook
with:
[Trigger]
Type=Path
Operation=Upgrade
Operation=Install
Target = boot/initramfs-linux*
Target = usr/lib/modules/*/vmlinuz
Target = usr/lib/initcpio/*
Target = usr/lib/firmware/*
Target = usr/src/*/dkms.conf

[Trigger]
Type=Package
Operation=Upgrade
Operation=Install
Target = linux
Target = linux-*
Target = nvidia*

[Action]
Description=Setting keyboard layout after updating initramfs
When=PostTransaction
Exec=/home/me/.local/bin/remaps pacman_hook
Split a file into multiple files given a pattern
csplit -z thisfile.md /^#\ / '{*}'   # pattern = line starting with # followed by a space
#csplit -z <FILE> <pattern> '{*}'
-z removes empty output files.
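A small demo of the "# " pattern (notes.md is a made-up file): csplit writes one xxNN file per section and prints each file's byte count.

```shell
printf '# one\na\n# two\nb\n' > notes.md
csplit -z notes.md '/^# /' '{*}'   # -z drops the empty piece before the first match
cat xx*                            # concatenating the pieces gives back the original
```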
Send/receive files securely
paru -S magic-wormhole
wormhole send FILE   # on the sender
wormhole receive     # on the receiver
Recursive globbing: the double asterisks (**)
Following the ArchWiki install guide, I saw this line: ls /usr/share/kbd/keymaps/**/*.map.gz. What does the ** mean? It matches filenames and directories recursively.
Thus, if there are files like /usr/share/kbd/keymaps/amiga/xxxx.map.gz and /usr/share/kbd/keymaps/mac/all/xxxx.map.gz, the command ls /usr/share/kbd/keymaps/**/*.map.gz will find both, even though the directory trees are different.
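A sketch of the recursion on a fabricated keymaps tree. Note that in bash ** only recurses after shopt -s globstar, while zsh supports it out of the box:

```shell
shopt -s globstar                     # bash-only step; zsh does not need it
mkdir -p keymaps/amiga keymaps/mac/all
touch keymaps/amiga/a.map.gz keymaps/mac/all/b.map.gz
ls keymaps/**/*.map.gz                # finds both files, at different depths
```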
Use ~/.profile, not ~/.bashrc
See here
By default, Terminal starts the shell via /usr/bin/login, which makes the shell a login shell. On every platform (not just Mac OS X) bash does not use .bashrc for login shells (only /etc/profile and the first of .bash_profile, .bash_login, .profile that exists and is readable). This is why "put source ~/.bashrc in your .bash_profile" is standard advice
Check if applications in Docker container are run as root
for c in $(docker ps -q); do
    docker inspect $c -f "{{ .Name }}:"
    docker top $c | awk '{print $1, $2, $8}'
    echo "--------------"
done
However I am unsure how to check for security concerns. Lots of websites state that “docker should not be run as root”, but they mean the process INSIDE the container, since the docker daemon dockerd must run as root. The problem for me is that some containers seem to need their init process to run as root. So how do I know?
Check IPs connected to a server (check from the server)
Check the command netstat (with the command watch).
More precisely, to extract only the connected IPs:
netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n
Some websites recommend netstat -tulpn.
To watch it in real time: watch netstat -tulpn. That's where the watch program shows its worth! For commands with awk, we need to put the command in quotes and escape the dollar sign so the outer shell does not expand it:
watch "netstat -ntu | awk '{print \$5}' | cut -d: -f1 | sort | uniq -c | sort -n"
Share clipboard between sessions (SSH <-> system)
https://bbs.archlinux.org/viewtopic.php?id=261128
Yann: “Got it. I took some time to study how the X clipboard works. I did not expect that. xclip or xsel are X processes without X windows. When using them to set the clipboard, they are just X windows owning the clipboard in their own buffer. When another X application needs to paste, the X server will call a specific function of the X application owning the clipboard to send the buffer content containing the copy to the stdin of the pasting X application. Why not have this buffer in the X server directly? This forces every X application to have the necessary functions to support this feature. Also, when closing the X application owning the clipboard, it is gone. I could see the -loop X argument in action: when X pastes are done, the xclip process gets killed and there is no more clipboard available.”
Trilby : “Yann wrote:
This forces every X application to have the necessary functions to support this feature.
False. Only programs that intend to “copy” or “cut” data to a selection need these functions, and the functions are in fact quite simple; or at least they can be, some programs may have elaborate versions, but the basic requirements are trivial. Yann wrote:
Also, when closing the X application owning the clipboard, it is gone.
This is true. And this is one of the primary motivations of selection managers to close this gap. Yann wrote:
Why not having this buffer in the X server directly?
Because it is not known ahead of time how big the selection data might be. Selections are often short text strings, but they can be anything including images, or arbitrarily large binary blob data. If the X server were to own this data the server would either have to start with a very large static data space - which still might occasionally be too small, or it'd have to dynamically allocate space as needed.
Far more importantly than this, though, is that X was designed as a network protocol. Under the original design the client and server were often not the same machine - and it still can be used this way even if many of us now have them both on the same physical machine. If the server were to store selection data, then anytime an image program copied image data to a selection, it'd have to send that data over the network. Then every time another client pasted the image, it would retrieve all that data over the network. Despite the sender and receiver being on the same physical machine, the data would have to be sent to another machine and back. This is very bad design. This is compounded by the fact that every time there was a copy operation, the data would have to be sent, whether or not any client would ever request it for pasting.
Contrast this with the design that was chosen: an image program copies large image data and all that is sent to the server is a very short message saying “I've copied something”. Then when another client program requests a paste, all the xserver sends back is the id of the selection owner. The pasting client contacts the copy client and the data is transferred directly from client to client on the same machine.
This was a very good design choice by the X11 developers.
Use tmux (with watch) to create a coding environment
Look at the program watch, which can re-run a command periodically to track changes in files and run actions according to the user's needs.
Force check at startup raspberry (and others?)
sudo touch /forcefsck; sudo reboot
Show python help for function from vim
Highlight a function name, then press Shift+k
.
Change manpager
The man pager is less by default on my OS. But we can change it, for example with export MANPAGER=nvim. However nvim does not display the text and colors properly. We should find a nicer solution.
Remove a file that is impossible to remove even as root
Check whether lsattr /the/file shows an i or a attribute. If yes: chattr -i /the/file or chattr -a /the/file. Then rm /the/file.
Trouble with docker after weird reboot (or current outage)?
It appears this may be to invalid data written to /var/run/docker/libcontainerd/containerd/events.log when containerd shuts down improperly. We were able to move this file and restart docker, to recover our containers.
Watch devices (dis)connections live
sudo dmesg -w
Test RAM with memtester
sudo memtester 8192 5
8192 = 8 GB of RAM to test; 5 = number of test passes.
Repair a broken zip file
I zipped a folder under Windows, only to find it broken when trying to unzip it on Linux.
Doing this seemed to work:
zip -F the_broken.zip --out repaired.zip
#zip -FF the_broken.zip --out repaired.zip   # only if the first line does not work
unzip repaired.zip
pacman: useful info
Inside /etc/pacman.conf
:
# NOTE: You must run `pacman-key --init` before first using pacman; the local
# keyring can then be populated with the keys of all official Arch Linux
# packagers with `pacman-key --populate archlinux`.
Firefox: see the most resource-consuming open webpages
about:processes
dino (xmpp client) dark theme
To enable the dark theme in the latest version (0.4) I had to execute this in a terminal:
gsettings set org.gnome.desktop.interface color-scheme 'prefer-dark'
git remote when ssh on different port than default (22)
Add this to .ssh/config:
Host 192.168.0.69
    Port 696969
It works! Otherwise, we would have to change the remote in ALL repos… with:
git remote add origin ssh://git_user@git_server:git_port/PATH/TO/REPO
Rootkit hunter
I discovered a tool, rkhunter, to track potential rootkits on a system.
rkhunter --update
rkhunter --propupd
rkhunter -c   # to check
Ignore changes to a file when committing to GitHub
For when I'm on a branch where I only want to merge a file from another branch, but can't checkout until committing the changes.
See : https://stackoverflow.com/a/18508527
git stash
git checkout <another-branch>
git stash apply
git add <one-file>
git commit
git stash
git checkout <original-branch>
git stash apply
Undo a git stash
git stash pop will unstash the changes.
If you want to preserve the state of files (staged vs. working), use :
git stash apply --index
Use man -Tpdf to output manual in PDF
Example with ls
man -Tpdf ls > ls.pdf
Prevent Xorg from being killed with “Ctrl+Alt+Backspace”
I added the file 147-no_terminating_binding
into /usr/share/X11/xorg.conf.d/
:
#/usr/share/X11/xorg.conf.d/147-no_terminating_binding
Section "ServerFlags"
    Option "DontZap" "True"
EndSection
Variables' existence in old bash versions (before 4.2)
macOS runs an old version of bash by default (~3.2 apparently). The test option -v was only implemented in bash 4.2, so to translate new-bash existence checks into the old form:
-v    =  ! -z ${...+x}
! -v  =  -z ${...+x}
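A sketch with a hypothetical variable MYVAR: ${MYVAR+x} expands to x only if the variable is set (even to the empty string), which is exactly what -v tests in bash >= 4.2.

```shell
# portable existence check that also works on bash 3.2 (no [ -v ... ] there)
unset MYVAR
[ -z "${MYVAR+x}" ] && echo "MYVAR is unset"   # prints: MYVAR is unset
MYVAR=""                                       # set, but empty
[ -n "${MYVAR+x}" ] && echo "MYVAR is set"     # prints: MYVAR is set
```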
Reset a branch on Github (remove history)
The idea is to copy the main branch, then remove it and rename. This will show only one commit in the new branch.
git checkout --orphan tmp-master   # create a temporary branch
git add -A                         # add all files and commit them
git commit -m 'Add files'
git branch -D master               # delete the master branch
git branch -m master               # rename the current branch to master
git push -f origin master          # force push master branch to Git server
Reduce a video size without loss of quality
https://unix.stackexchange.com/questions/28803/how-can-i-reduce-a-videos-size-with-ffmpeg
This answer was written in 2009. Since 2013 a video format much better than H.264 is widely available, namely H.265 (better in that it compresses more for the same quality, or gives higher quality for the same size). To use it, replace the libx264 codec with libx265, and push the compression lever further by increasing the CRF value — add, say, 4 or 6, since a reasonable range for H.265 may be 24 to 30. Note that lower CRF values correspond to higher bitrates, and hence produce higher quality videos.
ffmpeg -i input.mp4 -vcodec libx265 -crf 28 output.mp4
As of today, I haven't managed to make it work using the GPU… So I use this command to convert from mkv (h264) to mp4 (h264_nvenc):
ffmpeg -y -hwaccel cuda -hwaccel_output_format cuda -i input.mkv -c:v h264_nvenc output.mp4
⇒ 40% size reduction (~11× speed) [from 1300 MB to 803 MB]
Try an older version of bash with Docker
I needed to try bash-3.2, which is the default shell on macOS. Thanks to Docker it's simple:
docker run -it bash:3.2
Change Firefox URL bar size
about:config, search for devp, set layout.css.devPixelsPerPx from -1 to 1.5 (or less, or more)
Recursively download a website with wget
wget --level=inf \
     --recursive \
     --page-requisites \
     --user-agent=Mozilla \
     --no-parent \
     --convert-links \
     --adjust-extension \
     --no-clobber \
     -e robots=off \
     https://indulgent.website.url/
Check size and sort by size
du -sh -- * | sort -h
Mount an ISO
mount /path/to/image.iso /mnt/iso -o loop
Fonts in ArchLinux
fc-list              # get the list
fc-match sans        # find matches for "sans"
fc-match monospace   # find monospace fonts
fc-conflist
Config file: ~/.config/fontconfig/fonts.conf
Backup server remotely
Don't forget to change USER and SERVER_URL:
ssh USER@SERVER_URL "sudo -S dd if=/dev/mmcblk0 bs=64k status=progress | gzip -1 -" | dd of=rpiimg.gz bs=64k
Use server as a proxy
ssh -C2qTnN -D 8080 USER@SERVER_URL
Then in the browser: create a SOCKS proxy with url 127.0.0.1:8080.
Print from command line
lpr file.txt -P PRINTER_NAME   # tab-complete to find the printer name