This product includes software developed by Inferno Nettverk A/S, Norway.
information bucket
* application
* avatar
* usage analytics (browsing)
* trigger management
* cb_storagedata
* supports generic storage solution for data

sudo apt install npm;sudo npm install pm2 -g;
cd ~;npm install izy-idman-tools;mv node_modules/izy-idman-tools .;cd ~/izy-idman-tools;
npm install;
sudo apt-get -y install cmake python3-pip vim
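With pm2 installed, a minimal sketch of keeping a daemon from these notes alive across restarts (the app path follows the dev-mode steps later in this document; the process name is hypothetical):

```
cd ~/beta/xcast
pm2 start apps/kinesis.js --name kinesis   # "kinesis" is a hypothetical process name
pm2 save                                   # persist the current process list
pm2 startup                                # prints the command to re-launch pm2 at boot
```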
brew install python@3
brew install portaudio
$IZYPYTHONROOT/pip3 install dlib numpy opencv-python -vvv
/ See the instructions under the GPU section for installing Python tensorflow bindings /
$IZYPYTHONROOT/pip3 install matplotlib pyyaml tqdm imageio scikit-image scikit-learn pandas imageio-ffmpeg
$IZYPYTHONROOT/pip3 uninstall tensorflow protobuf
$IZYPYTHONROOT/pip3 install tensorflow protobuf
$IZYPYTHONROOT/pip3 install mediapipe -vvv
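To confirm the bindings actually resolve against the interpreter they were installed into, a quick import smoke test (the module list is an assumption based on the installs above):

```
# fails loudly if any of the just-installed bindings did not build correctly
$IZYPYTHONROOT/python3 -c "import dlib, cv2, mediapipe, tensorflow; print('imports ok')"
```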
nvidia-smi
/ You should see something like below /
/ dell XPS /
Driver Version: 472.19 and CUDA Version: 11.4
/ ASUS /
Driver Version: 510.47.03 CUDA Version: 11.6
Follow the steps below (this is from nvidia-container-toolkit):
[does not work] sudo apt-get install -y nvidia-open
[works] sudo apt-get install -y cuda-drivers
Keep a record of the working setup in ~/nvidia_setup.txt for future reference in case the driver breaks due to auto updates, etc. See below:
modinfo nvidia >> ~/nvidia_setup.txt
filename: /lib/modules/5.15.0-117-generic/updates/dkms/nvidia.ko
alias: char-major-195-*
version: 560.28.03
find /lib/modules/$(uname -r) | grep nvidia >> ~/nvidia_setup.txt
/lib/modules/5.15.0-117-generic/kernel/drivers/net/ethernet/nvidia
/lib/modules/5.15.0-117-generic/kernel/drivers/net/ethernet/nvidia/forcedeth.ko
/lib/modules/5.15.0-117-generic/kernel/drivers/i2c/busses/i2c-nvidia-gpu.ko
/lib/modules/5.15.0-117-generic/kernel/drivers/video/fbdev/nvidia
/lib/modules/5.15.0-117-generic/kernel/drivers/video/fbdev/nvidia/nvidiafb.ko
/lib/modules/5.15.0-117-generic/kernel/drivers/usb/typec/altmodes/typec_nvidia.ko
/lib/modules/5.15.0-117-generic/kernel/drivers/platform/x86/nvidia-wmi-ec-backlight.ko
/lib/modules/5.15.0-117-generic/updates/dkms/nvidia.ko
/lib/modules/5.15.0-117-generic/updates/dkms/nvidia-peermem.ko
/lib/modules/5.15.0-117-generic/updates/dkms/nvidia-uvm.ko
/lib/modules/5.15.0-117-generic/updates/dkms/nvidia-drm.ko
/lib/modules/5.15.0-117-generic/updates/dkms/nvidia-modeset.ko
sudo nvidia-smi >> ~/nvidia_setup.txt
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.28.03              Driver Version: 560.28.03      CUDA Version: 12.6     |
export PATH=/usr/local/cuda-11.8/bin${PATH:+:${PATH}}
~/cuda-samples/Samples/1_Utilities/deviceQuery/deviceQuery
docker run hello-world
Finally, make sure all pieces fit together, i.e.
docker run --gpus all nvidia/cuda:11.4.2-cudnn8-runtime-ubuntu20.04 nvidia-smi
If you are running into issues after a successful installation, such as:
nvidia-container-cli: initialization error: nvml error: driver not loaded: unknown.
sudo nvidia-smi
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.
Make sure the driver is installed for your kernel:
find /lib/modules -iname nvidia
find /lib/modules/$(uname -r) -iname nvidia
dpkg -l | grep nvidia
modinfo nvidia-520
If they are present but not loaded, this is most likely caused by a kernel or BIOS auto update:
mkdir tmp; cp /var/log/apt/*.gz tmp/;cd tmp;gunzip *;grep -B 4 -A 4 -nH $(uname -r) *;
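A small sketch that automates this check, assuming the driver is managed the same way as elsewhere in these notes (ubuntu-drivers/dkms):

```
# if no nvidia module exists for the running kernel, the kernel was likely
# auto-updated past the dkms build; reinstall the driver and reboot
if ! modinfo -k "$(uname -r)" nvidia > /dev/null 2>&1; then
  echo "nvidia module missing for $(uname -r)"
  sudo ubuntu-drivers autoinstall && sudo reboot
fi
```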
/ You should get the following /
/ dell XPS /
== /sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0 ==
modalias : pci:xxxxxxxx
vendor   : NVIDIA Corporation
driver   : nvidia-driver-470-server - distro non-free
driver   : nvidia-driver-510 - distro non-free recommended
driver   : xserver-xorg-video-nouveau - distro free builtin
/ Asus (NVIDIA GeForce RTX 3060 Laptop GPU) /
== /sys/devices/pci0000:00/0000:00:06.0/0000:01:00.0 ==
modalias : pci:v000010DEd00002520sv00001043sd000016E2bc03sc00i00
vendor   : NVIDIA Corporation
driver   : nvidia-driver-510 - distro non-free recommended
driver   : nvidia-driver-470 - distro non-free
driver   : nvidia-driver-470-server - distro non-free
driver   : xserver-xorg-video-nouveau - distro free builtin
If the output includes a recommended version, then simply use:
/ use the following to install the recommended driver for your hardware and reboot /
sudo ubuntu-drivers autoinstall
sudo reboot
/ optional /
sudo apt install nvidia-cuda-toolkit
nvcc --version
Afterwards, you can use the official tensorflow installation link and pick a CUDA version that is compatible with your hardware; below are some examples:
/ dell ubuntu /
pip3 install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio==0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
/ dell windows Driver Version: 461.92 CUDA Version: 11.2 /
/ Menu Pick: LTS (1.8.2), CUDA 11.1 /
pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
/ Lenovo: Driver Version: 472.19 CUDA Version: 11.4 /
pip install torch==1.10.2+cu102 torchvision==0.11.3+cu102 torchaudio===0.10.2+cu102 -f https://download.pytorch.org/whl/cu102/torch_stable.html
To make sure that the PyTorch installation is CUDA enabled, try running the following Python script:
/ check CUDA compatibility /
import torch
torch.cuda.current_device()
torch.cuda.get_device_name(0)
If you are using NVIDIA GeForce RTX 3060, you may see this message:
NVIDIA GeForce RTX 3060 Laptop GPU with CUDA capability sm_86 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37. If you want to use the NVIDIA GeForce RTX 3060 Laptop GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
Please refer to this link for further instructions: https://forums.developer.nvidia.com/t/rtx-3060-can-support-cuda-with-pytorch/178078/2
Utilizing the GPU can improve performance roughly tenfold; as an example:
with CPU: 210 (226s) 2.8 it/s
with GPU: 19 (29s) 31 it/s
If you run into booting issues after installing the Nvidia driver on AMD systems:
NVRM cpuidInfoAMD: Unrecognized AMD processor in cpuidInfoAMD
boot into recovery mode by pressing ESC, drop into shell and then
sudo apt-get purge nvidia-*
Make sure your system and related binaries are up to date:
sudo apt install --reinstall linux-image-generic
sudo apt install --reinstall linux-headers-generic
sudo apt remove --purge '^nvidia-.*';sudo apt remove --purge '^libnvidia-.*';
sudo rm /etc/X11/xorg.conf | true
sudo rm /etc/X11/xorg.conf.d/90-nvidia-primary.conf | true
sudo rm /usr/share/X11/xorg.conf.d/10-nvidia.conf | true
sudo rm /usr/share/X11/xorg.conf.d/11-nvidia-prime.conf | true
sudo rm /etc/modprobe.d/nvidia-kms.conf | true
sudo rm /lib/modprobe.d/nvidia-kms.conf | true
sudo apt update -y && sudo apt full-upgrade -y && sudo apt autoremove -y && sudo apt clean -y && sudo apt autoclean -y
Install the driver
sudo ubuntu-drivers autoinstall
check which driver is installed by
dkms status
nvidia, 520.56.06, 5.15.0-52-generic, x86_64: installed
/ to get the path and other info related to the device /
modinfo nvidia
sudo modprobe nvidia
modprobe: ERROR: could not insert 'nvidia': No such device
sudo nvidia-smi
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.
/ if not installed /
sudo dkms remove nvidia/(version number)
sudo dkms install nvidia/(version number)

CTIA is the most commonly used. Note that some manufacturers call the TRRS jack an audio combo jack (ASUS).
To separate the signals, the device is marketed as a "Headset Splitter Adapter", "Headset Adapter Y Splitter", or "Audio Splitter".
When the devices aren't equipped with audio jacks, you can utilize an external sound card.
* make the scripts available in your cli environment:
/ put this in your shell ~/.zshrc or ~/.bashrc /
You can add the tool to your PATH environment variable, i.e.
export PS1="%d % : "
export PATH=$PATH:~/izy-idman-tools;
* add ~/izyware/izy-idman-tools to Docker -> Preferences... -> File Sharing.
This will allow you to run side-by-side Firefox sessions and run your browser-based applications (vscode in tunnel mode), etc.
/Applications/Firefox.app/Contents/MacOS/firefox --profile /var/folders/6n/pvqhgtkj3r5fmsqlsnbv93m80000gp/T/tmp.9 &
/Applications/Firefox.app/Contents/MacOS/firefox --profile /var/folders/6n/pvqhgtkj3r5fmsqlsnbv93m80000gp/T/tmp.17 &
Note that chrome will open in the same session even if the profile-directory is specified:
export CHROME_PATH=/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome
$CHROME_PATH --profile-directory=/var/folders/6n/pvqhgtkj3r5fmsqlsnbv93m80000gp/T/tmp.dzm4bVTL
$CHROME_PATH --user-data-dir=/var/folders/6n/pvqhgtkj3r5fmsqlsnbv93m80000gp/T/tmp.dzm4bVTLid
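In other words, --user-data-dir is what actually isolates the session; a minimal sketch that spins up a throwaway one (the mktemp path is illustrative):

```
# each fresh user-data dir gets its own fully separate Chrome session
"$CHROME_PATH" --user-data-dir="$(mktemp -d)" &
```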
sudo launchctl asuser 502 open $CHROME_PATH
A better alternative is to use XQuartz. First create a new xterm:
xterm -fa Monaco -fs 14 -sb -sl 1000 -fg blue -bg white
Then fix the cut/paste on macOS by referring to https://superuser.com/questions/517878/why-cant-i-paste-into-xterm-xquartz. Selecting text in xterm and hitting ⌘+C to copy should work, if you have turned on "Enable key equivalents under X11" under the "Input" tab, and "Update Pasteboard when CLIPBOARD changes" under the "Pasteboard" tab. For Paste:
~/plat/p/apps/aws/remoteaccess/runx.sh . "~/openchrome.sh"
You can use the shell inside a docker container to allow multitasking across different user profiles.
`
izyaws.sh
`
the following directories will be available inside the shell:
/izycommondir -> $IZYSHELL_COMMONDIR
/izyhostdir -> $(pwd)

[shadow admin, ofc session]
data-collection/app/device-manager/list: find the idtoken for the profile
from the legacy izyware, kick off a taskrunner item with the following parameters
runtimeid=idtoken
parameters=statecapture5.5
this can take up to 15 minutes.
[workstation]
ifconfig | grep 10.42
[host]
izy.devops "ssh?runx" . "~/openchrome.sh"
[host chrome]
Login as shadow admin (make sure it is in the same tab -- or automation won't work)
install the toolbar
if bucket already assigned, then go to data-collection/app/information-bucket, find the profile > device switch > click > ok.
if bucket not already assigned, go to data-collection/app/device-manager/list: find the profile
if not already, set bucketId from UI (if no bucket is assigned, then go to the payloads tab, find the payload, click on row > popup > click switch)
sudo dpkg -i google-chrome-stable_current_amd64.deb
sudo apt-get install -f
google-chrome --version
wget https://storage.googleapis.com/chrome-for-testing-public/128.0.6613.84/linux64/chromedriver-linux64.zip
unzip chromedriver-linux64.zip;
rm chromedriver-linux64.zip;
chromedriver --url-base=/wd/hub
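To verify that chromedriver is up and answering, a quick probe of its status endpoint (assumes the default port 9515 together with the --url-base above):

```
# a healthy instance replies with {"value":{"ready":true,...}}
curl -s http://localhost:9515/wd/hub/status
```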
The SOCKS clients can then either initiate the connections from the server or initiate the connections from a third-party box (3B) in your network (hence the name double dipping or DD). In both scenarios, the client will connect to socks5://$IZYSOCKSSERVERIP:$IZYSOCKSSERVERPORT
For the DD scenario to work, you will need to:
vim /etc/ssh/sshd_config
set GatewayPorts to yes
sudo systemctl restart ssh.service
Also, sometimes on the IAS there will be dangling sshds attempting to process forwarding requests on $IZYSOCKSSERVERPORT. Make sure they are cleared:
ssh -i $IZYSOCKSSERVERPEMFILE $IZYSOCKSSERVERSSHUSER@$IZYSOCKSSERVERIP
sudo lsof -i -P -n | grep $IZYTCPACCESSPORT
Find all of them and kill them. On the IAS make sure that the binding address for the port is 0.0.0.0 (not 127.0.0.1):
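A one-liner sketch of that cleanup (lsof -t prints bare PIDs, so it pipes straight into kill):

```
# kill every process still holding the forwarding port
sudo lsof -t -i :$IZYTCPACCESSPORT | xargs -r sudo kill -9
```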
netstat -tupln | grep $IZYTCPACCESSPORT
0.0.0.0:19999

System Preferences > Sharing > Turn remote login on
/ Enable Socks Server on Local /
ssh -v -D $IZYTCPACCESSPORT -C -N $USER@localhost
Then "publish" the local port on the remote server:
/ full background /
ssh -v -N -f -R $IZYTCPACCESSPORT:localhost:$IZYTCPACCESSPORT -i $IZYSOCKSSERVERPEMFILE $IZYSOCKSSERVERSSHUSER@$IZYSOCKSSERVERIP
/ Interactive console /
ssh -v -g -R $IZYTCPACCESSPORT:localhost:$IZYTCPACCESSPORT -i $IZYSOCKSSERVERPEMFILE $IZYSOCKSSERVERSSHUSER@$IZYSOCKSSERVERIP
This is the preferred method because it will allow accessing and monitoring the VNC session from the xcast app web interface.
The solution is to use the X virtual framebuffer.
x11vnc -display :1 &
export DISPLAY=:1;vncviewer client_ip:5900 &
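Note that the :1 display must exist before x11vnc can attach to it; a minimal sketch using the xvfb package installed earlier (the resolution is an assumption):

```
# start a virtual framebuffer on :1, then attach x11vnc as above
Xvfb :1 -screen 0 1280x720x24 &
```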
Create a ~/vncpublish.sh script on the Ubuntu host for forwarding the VNC port:
rm fifo0
rm fifo1
mkfifo fifo0 fifo1
nc -l UBUNTUUSERIP 10000 > fifo0 < fifo1 &
nc CLIENTMACHINEIP 5900 < fifo0 > fifo1 &
Then use the vnc script
cli/remoteaccess/vnc.sh .
VNC is based on the standard RFB protocol and, generally speaking, most VNC clients should work. Notice that macOS implements Apple Remote Desktop (ARD) based on VNC and using version "3.889" of the RFB protocol, which is non-standard. See https://www.tenable.com/blog/detecting-macos-high-sierra-root-account-without-authentication and https://cafbit.com/post/appleremotedesktop_quirks/ for more info.
The following tools can be used
remmina --server=ip --protocol=vnc
vncviewer ip:5900 &
IZYSHELL_COMMONDIR
: Host path mapping to shell /izycommondir
IZYDEFAULTUSER
: Specifies an ID associated with a user or group.
IZYAUTHORIZATIONTOKEN
: Specifies the session key for the cloud access APIs.
You can use a single pipe to push the stdout result into the clipboard:
izyaws.sh eugene whoami | pbcopy
To pick up content from files, etc.
pbcopy < izyshell.sh eugene
To grab contents from clipboard:
pbpaste > mydata
For file sharing, Apple menu > System Preferences, then click Sharing and then use "Share files and folders using SMB.". This will allow collaboration with Windows and Mac users.
On Windows, replace pbcopy with clip.
net user
To copy files across Mac and Windows, you can share a folder from Windows, and then mount the folder using Finder > Go > Connect to server and typing smb://windows_ip/folder
Windows 10+ ships with SSH; to enable it use Settings > Apps > Apps and features > Optional features > Add a feature, or run powershell as administrator:
Get-WindowsCapability -Online | ? Name -like 'OpenSSH.Server*'
Get-Service -Name sshd
Start-Service sshd
Set-Service -Name sshd -StartupType 'Automatic'
Start-Service 'ssh-agent'
Set-Service -Name 'ssh-agent' -StartupType 'Automatic'
On Linux, replace pbcopy with xclip.
* Amber Alerts
* Galaxy S9: Settings > Connections > More Connection Settings > Wireless Emergency Alerts > Settings > Alert Types
* Activate
* Create
* Meet In
* Meet Out
* Phone Output
* Activate
* Import profile. Or optionally create a new profile.
* Phone Output > AUAudioFile Player > 10 Band HQ (1X max, progression towards bass - otherwise it will become the telephone effect), AUNewP (-233,8,ON), AUMultiBComp () > Volume (79, 2X) > PeakRMS > Meet In
* Profile 2
* Branch Out Meet Out with Low Volume for Smoke Tests
* Install latest nodeJs and izy-idman-tools
* transfer the AI libraries
mkdir -p ~/izyware/izy-idman-tools/id;rsync ....
open ~/izyware/izy-idman-tools/id;
/ switch finder to columns view, create alias for izyware and move alias to Desktop /
mkdir -p ~/izyware/izy-idman-tools/ai;
git clone https://github.com/davisking/dlib-models.git ~/izyware/izy-idman-tools/ai/dlib-models;
bunzip2 ~/izyware/izy-idman-tools/ai/dlib-models/shape_predictor_68_face_landmarks.dat.bz2
* Make sure power settings are set up correctly post restart or crash
sudo systemsetup -setcomputersleep Never
* System Preferences > Sounds
* Check "Show volume in menu bar"* Set the input volume for internal Microphone to maximum
* System Sound Default Output > Meet Out* System Sound Default Input > Meet In
* Make sure the screen font size and object sizes are large enough to be used for side-by-side viewing and interacting.
* Use data collection features to set up session and messaging.
* Privacy > Only allow calls from contacts
* AV > Mic > Meet Out
* AV > Automatically Adjust Microphone Settings (Off)
* AV > Noise Cancellation (Off)
* AV > Automatically Adjust Settings (OFF)
* AV > Speakers > Phone Output
* Calling > Advanced > Answer Incoming Calls Automatically
* Calling > Show call window when in background (yes)
* Pick "Only using Virtual Camera"
* Default canvas resolution will be set to your screen resolution (1920x1080). Reduce that to 720p
* Image customization: use the following sequence of commands
find / -iname placeholder.png 2> /dev/null | grep png | read PLACEHOLDERIMAGEPATH; dirname $PLACEHOLDERIMAGEPATH | read PLACEHOLDERIMAGEPATH;echo copying to $PLACEHOLDERIMAGEPATH;sudo cp placeholder.png $PLACEHOLDERIMAGEPATH;ls -l $PLACEHOLDERIMAGEPATH;
* profile basic01
* Scene configuration
* Scene > Sources > izyware/izy-idman-tools/id/...
* Scene 2 > Sources > izyware/izy-idman-tools/id/...
* For scene transitions use Fade.400ms
* Source configuration:
* Loop option is enabled
* All the other options are disabled. In particular, restart should be disabled, otherwise the switching will not work well.
* Always signed in
* Bookmark the personal room link
* Always have chat window open
* Settings > Share Screen > Window Size when sharing: Maintain current size
* Settings > Share Screen > When I Share > ... : Show all options
* Settings > Mic > Meet In
* Settings > Speaker > Meet Out
* Background
* izyware/izy-idman-tools/id/...
* Use more modular pieces and replace long components with smaller attachable ones.
* More space required in the backend for screen
* Top bar and cross bar are required for stability
* camera setup
* external cameras are preferred to the phone camera
* the camera angle is already set up so it will not introduce stress into the system and no dismantling.
* computer internal webcams can not be easily tilted, mounted, or their distance to target adjusted.
* the final angles and settings can be tested on an external subject before deployment
* Setup a basic access point with blank screen
/ disable UEFI Secure Boot in the BIOS -- This can cause problems in installing NVIDIA drivers /
/ find all the computers on network /
/ the netgear web portal will show the registered machines with their ip /
http://192.168.1.1/ (admin,password)
/ make sure SSH is enabled /
/ If you get lock errors due to update process, reboot and retry /
sudo apt-get install openssh-server
sudo systemctl enable ssh
sudo systemctl start ssh
/ if you don't have a test user with sudo privilege, add it /
useradd test
usermod -aG sudo test
/*
refer to izy.devops section for creating a key-pair for easy sshing into the box, and then ssh into the box
*/
izy.devops "ssh?shell" .
/ ssh into the box /
/ install tools /
sudo apt -y install xvfb x11vnc xdotool xterm arp-scan net-tools vim tigervnc-viewer imagemagick nodejs npm ffmpeg curl;
/ turn off all auto updates. Auto updates can cause issues with custom drivers such as NVIDIA /
sudo vim /etc/apt/apt.conf.d/20auto-upgrades
/ set the machine name AS SEEN BY THE CLIENT WORKSTATION; pick a name consistent with the reported ISP-id; reboot required /
sudo vim /etc/hostname
/ some NVIDIA hardware may cause boot issues, so disable the splash screen /
sudo vim /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT=""
sudo update-grub
/ connect to gnome desktop and vnc to host:5900 /
/ NOTE: during the installation of Ubuntu, pick the option where the user is automatically logged-in or this won't work /
export DISPLAY=:0;x11vnc;
/ enable internet connection sharing (ICS) /
If using wired:
Network > Wired (Select the interface you would like to share) > Settings (Cogwheel) > IPV4 > Share to other computers
If using Wireless Hotspot, use the network manager service from the GNOME Desktop:
sudo vim /etc/NetworkManager/system-connections/Hotspot
nmcli -p -f general,wifi-properties device show wlan0
/ block service ports (ssh, etc.) from being accessed /
sudo ufw deny from 10.42.0.0/24 to any
sudo ufw allow from 192.168.1.0/24 to any
sudo ufw show added
sudo ufw enable;sudo ufw status numbered
/ if you need to allow only a single port, make sure to add it to the top /
sudo ufw insert 1 allow proto tcp from 10.42.0.0/24 to 10.42.0.1 port 8082
/ to get the client ip - grab enXXXXXXXX from ifconfig /
sudo arp-scan --localnet -I enXXXXXXXX
sudo service gdm stop
* start the kinesis app in dev mode
clear && printf '\e[3J';~/plat/p/izyidman/5/cli/remoteaccess/runapp.sh ~/plat/p/servers/
mkdir -p ~/beta;rsync -av --exclude="node_modules/*" ~/plat/p/izyidman/5/ ~/beta;cp ~/beta/queryObject.
cd ~/beta/xcast;pm2 start apps/kinesis.js
* enable audio: when the gui mode gets turned off, you may have trouble accessing audio as a non-root user. The issue could be related to pulseaudio server not allowing access to your user. Typically, pulseaudio server runs as the main admin (non root). Every user that belongs to the audio group is able to access the pulseaudio server.
sudo usermod -aG audio test
sudo reboot
sudo alsactl restore
cat /proc/asound/cards
amixer -c 0
amixer -c 2 set 'Headphone' 90%
defaults.pcm.card 1
defaults.ctl.card 1
aplay ~/beta/xcast/data/test-48.0-16-mono.wav
while [ 1 ] ; do aplay ~/beta/xcast/data/test-48.0-16-mono.wav;done
sudo arecord -l
sudo arecord -f S16_LE -d 10 -r 16000 --device="hw:3,0" /tmp/test-mic.wav
* enable virtualcamera service
sudo reboot;
sudo apt-get install ubuntu-restricted-extras;
sudo v4l2-ctl --list-devices;
sudo v4l2-ctl -d8 --list-formats-ext
./service/virtualcamera/feedcamerartsp.sh ubuntu-xxx
* (Optional) Steps for further optimization
/ enable auto login -- requires a reboot /
sudo vim /etc/gdm3/custom.conf
xinput list
/ grab the ID for the buggy touchscreen Atmel maXTouch Digitizer /
xinput --disable
xgamma -gamma 1.500
/ brightness and resolution /
xrandr | grep " connected" | cut -f1 -d " "
xrandr --output STRING_FROM_ABOVE --brightness 1.1
xrandr --output STRING_FROM_ABOVE --mode 1360x768
/ after some inactivity display brightness drops and we see the lines /
to send keys and mouse to active app:
xdotool type "Hello world";
disable auto updates: https://linuxconfig.org/disable-automatic-updates-on-ubuntu-20-04-focal-fossa-linux
; TIP: list these by gsettings list-recursively | grep -i "suspend\|sleep"
; set desktop background to gray (note black will reflect off of screen)
gsettings set org.gnome.desktop.background picture-options 'none'
gsettings set org.gnome.desktop.background primary-color '
gsettings set org.gnome.desktop.lockdown disable-lock-screen 'true'
gsettings set org.gnome.desktop.screensaver lock-enabled false
; wakelock issues: at 4 minutes the screen resolution degrades
; disable automatic brightness
gsettings set org.gnome.settings-daemon.plugins.power ambient-enabled false
; Power Saving > Dim Screen When Inactive (Disable)
gsettings set org.gnome.settings-daemon.plugins.power idle-dim false
; Power Saving > Blank Screen (Never)
gsettings set org.gnome.desktop.session idle-delay 0
; Suspend & Power Button; Disable Notification: "Computer will suspend very soon because of inactivity"
gsettings set org.gnome.settings-daemon.plugins.power sleep-inactive-ac-type 'nothing'
gsettings set org.gnome.settings-daemon.plugins.power sleep-inactive-battery-type 'nothing'
* internal microphones and speakers.
* wireless antenna
* cover internal cam
* cover all displays with black, otherwise it will cause glare on the tablet and its camera will pick it up from the tablet
* Settings
* displays
* brightness=minimal
* automatically adjust brightness: No
* True Tone: No
* resolution: scaled, larger text
* Battery
* Power Adapter
* Turn off display after = never
* Prevent your Mac from automatically sleeping when display is off
* Use 720p frame size.
* laptop: head to frame ratio of 1/4-3/16 width; 1/2-1/3 is the standard for laptop shooting.
* external: head to frame ratio of 1/8 width, 1/4-1/5 height is the standard for laptop shooting.
* movement generation
* mechanical devices (chair) create smoother animations and more predictable picture quality.
* AI mesh detection currently detects the following human gestures:
* Hello
* GoodBye
* Smile
* Thinking
* Scratch Face, Head
* Look down
* Look away
* Get up and leave
* Come in and Sit
* Raise Hand
* Wave to people
* Look at camera
* Point at camera
* Hand Gestures
* ZM > Settings > Test Audio: Should hit middle green bars
* Turning off VC should show transparency (no OBS)
* The sync loop should respond
* The audio and images should fully work.
npm run testvideo
npm run testvideo queryObject.izyAnimateWorkspacePath ~/izyware/izy-idman-tools/id/$IZYANIMATEUSERID/virtual/izyanimate/$IZYANIMATE_WORKSPACE queryObject.inputVideoCapturePath $IZYANIMATESOURCEFRONTDEVICEID queryObject.videoGenerationMode test|direct|fomm|mixer1
clear && printf '\e[3J';npm run testservice queryObject.service xcast@xcast,portforwarding@xcast,mixeradmin@peer,audiooutput@peer,jsonio@peer,virtualcamera@peer
There are several recommended tools available for this task:
sudo apt-get install ubuntu-restricted-extras;
sudo apt-get install libavcodec58 ffmpeg;
The oldest protocols for streaming are RTSP and RTMP. They have a few seconds of delay and are not web friendly. The quickest protocol is WebRTC. For an in-depth analysis refer to wowza-streaming-protocols.
docker run --rm -it -e RTSP_PROTOCOLS=tcp -p 8554:8554 -p 1935:1935 -p 8888:8888 aler9/rtsp-simple-server
export RTSPURL=rtsp://localhost:8554/mystream
To publish content to the RTSP service
/ on linux /
/ on linux /
sudo ffmpeg -f x11grab -framerate 15 -video_size 640x480 -i :0.0 -f rtsp $RTSPURL
/ macos capture camera /
ffmpeg -f avfoundation -framerate 15 -video_size 640x480 -i 1:0.0 -f rtsp rtsp://192.168.1.12:8554/mystream
To consume the content using vlc:
vlc $RTSPURL
For gstreamer, to debug and inspect that everything is working, use playbin:
/ make sure mpeg4 and dependencies are installed /
gst-launch-1.0 -v playbin uri=$RTSPURL uridecodebin0::source::latency=300
To consume the content:
/ to show on desktop /
gst-launch-1.0 rtspsrc location=$RTSPURL latency=300 ! decodebin ! clockoverlay ! autovideosink
/ to route to video device /
gst-launch-1.0 rtspsrc location=$RTSPURL ! decodebin ! videoconvert ! v4l2sink device=/dev/video0
As discussed earlier, RTSP may introduce delays, but that can be optimized by using gstreamer at the source (as opposed to ffmpeg) and fine-tuning the configuration for latency and frame buffer size; please refer to rtsp-stream-from-ip-camera-delay-in-input-in-preview.
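A hedged sketch of what moving the source side to gstreamer could look like (rtspclientsink ships with the gst RTSP plugins; the capture device and encoder tuning are assumptions):

```
# capture, encode with low-latency x264 settings, and publish to the RTSP server
gst-launch-1.0 v4l2src device=/dev/video2 ! videoconvert \
  ! x264enc tune=zerolatency ! h264parse ! rtspclientsink location=$RTSPURL
```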
/ ffplay will listen for udp packets /
ffplay -fflags nobuffer -flags low_delay -framedrop -strict experimental udp://ip:port
/ generate the stream w/ ffmpeg from your webcam /
ffmpeg -f avfoundation -list_devices true -i ""
ffmpeg -f avfoundation -framerate 30 -i "1" -f mpeg1video -b 200k -r 30 -vf scale=640:360 udp://192.168.1.14:1234
ffplay rtp://127.0.0.1:1234
ffmpeg -re -f lavfi -i aevalsrc="sin(400*2*PI*t)" -ar 8000 -f mulaw -f rtp rtp://127.0.0.1:1234
* spreed: https://www.fosslinux.com/45051/how-to-install-spreed-webrtc-server-on-ubuntu.htm
v4l2-ctl --list-devices
HD Webcam C615 (usb-....):
/dev/video2
/dev/video3
/dev/media1
/ v4l2loopback is the virtual camera on linux /
sudo apt-get install v4l2loopback-dkms v4l2loopback-utils;
/ make sure the user is part of the video group /
sudo usermod -aG video $USER;sudo reboot;
/ create /dev/video0 /
sudo rmmod v4l2loopback;sudo modprobe v4l2loopback video_nr=0 exclusive_caps=1 card_label="vcam0";
sudo v4l2-ctl --list-devices;sudo v4l2-ctl -d0 --list-formats-ext
/ create two devices /
sudo rmmod v4l2loopback;sudo modprobe v4l2loopback devices=2 video_nr=0,1 exclusive_caps=1,1 card_label="vcam0,vcam1";
/ This is needed or python will not work /
gst-launch-1.0 -v videotestsrc ! clockoverlay ! v4l2sink device=/dev/video0
/ Do not use cheese or ffplay. They cause trouble. Use gst instead /
export DISPLAY=:0;gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! xvimagesink
/ For screen calibration /
mimeopen -d ~/stillimage.png
You can then pipe various sources to the virtual camera device using any of the toolchains described above. We recommend using gst:
gst-launch-1.0 ximagesrc startx=0 starty=0 endx=640 endy=480 use-damage=0 ! videoconvert ! v4l2sink device=/dev/video0
If you like to use ffmpeg see ffmpeg-send-video-to-dev-video0-on-ubuntu:
ffmpeg -f x11grab -framerate 15 -video_size 640x480 -i :0.0 -f v4l2 /dev/video0
If you are interested in making the opencv python output available as a virtual camera source to the system, refer to virtual-camera-for-opencv-using-v4l2loopback.
gst-inspect-1.0 faceblur
That plug-in comes with:
apt list --installed | grep gstreamer
/ generate test videos /
ffmpeg -f lavfi -i testsrc=duration=10:size=1280x720:rate=30 testsrc.mpg
ffplay testsrc.mpg
~/ffmpeg -i recording.mov -vcodec h264 -acodec mp2 recording.mp4
/ extract audio /
~/ffmpeg -i ~/izyware/izy-idman-tools/id/$IZYANIMATEUSERID/virtual/izyanimate/$IZYANIMATE_WORKSPACE/datasamples/1/video.mp4 -q:a 0 -map a ~/izyware/izy-idman-tools/id/$IZYANIMATEUSERID/virtual/izyanimate/$IZYANIMATE_WORKSPACE/datasamples/1/audio.wav
/ list all devices on mac /
~/ffmpeg -f avfoundation -list_devices true -i ""
/ save output of device 1 to test-output.aiff /
~/ffmpeg -y -f avfoundation -i ":1" newfilename.ogg
/ record audio and video /
~/ffmpeg -f avfoundation -video_size 640x480 -framerate 30 -i "0:0" -vcodec libx264 -preset veryfast ~/tmp/raw.mp4
/*
croptrim, cut the middle of the video
PTS = Presentation TimeStamp, expressed in seconds (supports partial seconds)
*/
export FFMPEGINPUTVIDEO=~/izyware/izy-idman-tools/id/$IZYANIMATEUSERID/virtual/izyanimate/$IZYANIMATE_WORKSPACE/datasamples/1/video.mp4
~/ffmpeg -i $FFMPEGINPUTVIDEO -filter_complex "[0:v]split=2[copy1][copy2],
[copy1]trim=00:05,setpts=PTS-STARTPTS[part1],[copy2]trim=112:120,setpts=PTS-STARTPTS[part2],
[part1][part2]concat=n=2[out]" -map "[out]" ~/tmp/raw.mp4
/ audio conversion /
~/ffmpeg -i data/stereo.wav -ac 1 data/mono.wav
~/ffmpeg -i data/test-44.1-16-mono.wav -ar 48000 data/test-48.0-16-mono.wav
~/ffmpeg -i data/test-48.0-16-mono.wav -c:a pcm_s32le data/test-48.0-32-mono.wav
~/ffmpeg -i audio/test-48.0-16.wav -acodec libvorbis audio/test-48.0-16.ogg
~/ffmpeg -i file.jpeg source.png
./ppm.sh "analyze?analyzeVideo" --queryObject.imageRepoPath=$izyAnimateWorkspacePath/frames--queryObject.inputVideoCapturePath=~/izyware/izy-idman-tools/id/$IZYANIMATEUSERID/virtual/izyanimate/$IZYANIMATE_WORKSPACE/datasamples/izyanimate-1641837127-w880xh580.mp4
./ppm.sh "animate?animate" --queryObject.imageRepoPath=$IZYANIMATEROOT/frames/$IZYANIMATEWORKSPACE--queryObject.inputVideoCapturePath=$IZYANIMATEROOT/datasamples/$IZYANIMATEWORKSPACE.mp4--queryObject.maxFrames=10000--queryObject.frameSize=400--queryObject.waitForKeyboardOnEachFrame=True
./ppm.sh "animate?animate" --queryObject.imageRepoPath=$IZYANIMATEROOT/frames/$IZYANIMATEWORKSPACE--queryObject.maxFrames=10000--queryObject.inputVideoCapturePath=2--queryObject.frameSize=400--queryObject.cropInput=True--queryObject.resultsPath=$IZYANIMATEROOT/results/$IZYANIMATEWORKSPACE
To list audio devices
npm run getaudiodevices
For dependencies: naudiodon has a dependency on node-gyp, which will require the command line tools to be installed, so make sure to do:
xcode-select --install
* M1 family does not have display scaling to 1280x720 and will slow down the system.
* Commandline automation for mixed architecture apps will fail.
In contrast, smartphones have cameras with much higher resolution and quality. Screen recordings can be megapixels (like 2000x3000).
As of 2021, the virtual camera provided in broadcasting software offers 1280x720 resolution at 30 FPS.
* never try customizing or changing things during a live run. the audio feedback saves.
* a faraday cage is not easy to set up. wlanectomy is the preferred method.
* vicinity can be tracked using location services, wifi, bluetooth, etc., so even devices without a gps can be geo tracked.
* Location Services
* Manual turn off
* WLAN
* wlanectomy
* auto turn off
* scripting: networksetup -setnetworkserviceenabled Wi-Fi off
* OS startup setting: None
* Bluetooth
* btectomy
* auto turn off
* scripting
* OS startup settings
* Settings > Bluetooth > Advanced: Turn off "Open when .. detected"
* brew install blueutil; blueutil -p 0
* does not respond when in the background for incoming requests
* it should always be running and it should automatically recycle after hours.
* should support single audio interface; stream data over high speed LAN for higher quality
* simple tcp socket streaming to an input using the node audio lib
* have a single interface in dashboard that shows all users and screens in one spot
* Face landmarks detection - Opencv with Python
* Build Real Time Face Detection With JavaScript
* Deepface Lab: from iperov
* DeepFaceLive: Windows image
* Paper - Face2Face - Real time Face Capture and Reenactment of RGB Videos
* Face2Face - Real time Face Capture and Reenactment of RGB Videos
* periodic check for making sure tracks are in sync (they tend to get out of sync)
* debounce and soften transitions, otherwise they jump back/forth
* ability to have more than 2 tracks for environment switch (happy, leave, etc.)
* switches across collection of pairs of tracks.
* add "pings" to information bucket under a bucket and rename it to "activity"
* pings, capture, etc. should all be consumable from an analytics dashboard
* can track across different devices (device id is 1 parameter)
* device switch should be renamed to device actions (we had raw data before and customer feedback was negative)
* show username in foreground in white or show logo (so that it looks like Zoom)
* use a background picture for this: gsettings get org.gnome.desktop.background picture-uri
* npm this: pkill -f firefox;declare -x DISPLAY=":0";firefox "http://....."
* allow injection of dashboardcomposeConfig, dashboardkvsSchema variables from the environment.
npm run dashboard
* UIWebKit and the like on iOS have reported this problem.
* workaround is to use: https://github.com/cordova-rtc/cordova-plugin-iosrtc
* subscribe to the on-error event and prevent the service from crashing when the address is in use
* useful for adjusting the screen when there are screen misalignments
* previous iterations would click and change the state
* The other advantage is that it can be applied to other sources besides audioInput (streamWaveLoop, etc.)
* add tracking and cleanup functionality to webAudio
* Some browsers (i.e. iOS 16 Safari) won't call onclose when the following happens: [blocked] The page at https://xxx was not allowed to run insecure content from ws://xxx
* use //service/serviceName?method to query values on events. do not pass state around in the events
* use cancelAnimationFrame for making audiovisualizer compatible with IDE
* capture issues related to streams in the audiovisualizer view
* implement connection visualizer widget
* use notifySubscribers and service subscription model
* add code for detecting suspended audioContext
* improve Webaudio to allow parity between browser native code and non browser environment
* temporarily disable currentMicSilenceDetected
* allows audioContext to get created and be ready
* allows mixed mode deployments between browser and nonBrowser environments
* temporarily disable stopStreamWhenPowerIsBelow, clearExistingObjects functionality
* update webAudio with better logging
* added documentation for externalScriptProcessorObject
* use pkgModuleString and full config names
* allows for more readability
* on mobile we often disconnect. this feature will make refreshing easier
* string usage will make implementing cli easier
* decouples cloud settings from audio settings
* stopStreamWhenPowerIsBelow property is implemented for nonBrowserAudioInputNode and is configurable based on the volume
* onAudioPacket is implemented for socketReader to indicate silence in the monitoring panel
* customer issues addressed:
* customers have reported that on slow internet connections the speaker gets too much delay
* improve ui responsiveness for inputless monitoring
* security feature for when the device is non standard
* useful for when the data source has hardware bugs. For example GHW USB AUDIO would not sample at 48khz and would return a stream at 48.1khz.
* this can be utilized as a workaround for the memory leak in the workstationaudioout. customers had reported memory going up to 6 GB overnight.
* over time, this would introduce creeping delays into the speaker which would eventually reach multiple seconds.
* keepAlive is useful when no source is connected but the QOS metrics need to be sent. this will make the feedback independent of workstationaudioout and debugging easier.
* consoleMode allows live inspection of QOS data
* add useAudioCopy mode for legacy workstationaudioout clients
* allows the user to customize which part of the display to project
* allows better performance for non-browser environments
* without this feature, the admin will still see that the user is "focused" and will assume that the audio is getting through which is incorrect. the delay will start increasing, but it would be unclear why.
* it will introduce a permanent delay to the next connection which is not captured by the admin dashboard delay (possibly due to clogging on the admin client, but the reason is not entirely clear yet). This is really bad because it will ruin the confidence that the user might have in the admin cli dashboard.
* provides smoke test for audio filters
* ensures that service can be started from a reboot with no issues
service/virtualcamera/feedcamerartsp.sh ubuntu-xxx
npm run admincli queryObject.queryObjectId ../queryObject.xcast.xcast.json queryObject.focus
* useful for bypassing naudio interface and debugging networking performance issues
* implement filters for audioStreamBetweenNodesLogAllStreamData and audioStreamBetweenNodesGrepStr
* add buffering to make sure all chunk sizes and delays are handled properly
* add magic number to header to detect corruptions
* QOSMetrics will be based on packets and sequence numbers after this change
* implement simulateChoppyChunks to test audio transmission over low quality networks
* allows for overlaying protocol metadata and performance metrics inside the audio stream
clear && printf '\e[3J';npm run testservice queryObject.service testmixeradminspeakerxcast@xcast,testmixeradminspeakerenduser@peer;
* formatting the logs such that they are consumable from terminal
* dataStreamMode (streammodecooked, streammoderaw): raw will be useful for high-performance relaying of streams without inspecting the contents
* streamProtocol (streamproto0, streamproto1): allow protocol implementations and versioning over cooked streams
* enableQOSMetrics: collection and sending of QOS metrics
* enables easier testing and implementation of cli interface
* helps quickly determine whether the audio is connected and working
* added icons and removed extra text
* group and hide controls and organize in layers
* this will allow controlling of device volume on service startup
* support Float32 and 16BitPCM datastreams
node tools/acp.js 1 :10003 0 1
node tools/acp.js :10003 6 0 1
node apps/workstationaudioout.js dev
clear && printf '\e[3J';npm run testservice queryObject.service xcast@xcast,portforwarding@xcast,mixeradmin@peer,virtualspeaker@peer,jsonio@peer,virtualcamera@peer,workstationaudioout@peer
* this will address the blips and audio glitches
-f lavfi -i "sine=frequency=480:duration=50000000" -i inputFifoPipeName -filter_complex amix=inputs=2:duration=first,...
* the output stops when 1 of the inputs stops
* adding sine wave via amerge: amerge=inputs=2
* the output stops. it will also resample to the sine frequency and cause distortion.
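Spelled out as a complete command, a sketch of the amix approach above (the pipe name is from the snippet; sample format and rates are assumptions):

```
# with duration=first the mix follows the first (sine) input, which is effectively endless
ffmpeg -f lavfi -i "sine=frequency=480:duration=50000000" \
  -f s16le -ar 48000 -ac 1 -i inputFifoPipeName \
  -filter_complex "amix=inputs=2:duration=first" \
  -f s16le -ar 48000 -ac 1 pipe:1
```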
clear && printf '\e[3J';npm run testservice queryObject.service ajmixermixer@xcast,ajmixeradmin@peer,workstationaudioout@peer
* allows defining context specific naming for services
* utilize async and use_wallclock_as_timestamps
* async relies on the timestamps of the audio. For audio that comes from a video stream the timestamps are present, but for a raw feed which has no timestamps, ffmpeg assigns sequential timestamps irrespective of when the audio samples actually arrive. Adding use_wallclock_as_timestamps will force using the system clock; see the sketch below.
* see also this post for more information: https://stackoverflow.com/questions/20439640/ffmpeg-audio-transcoding-using-libav-libraries
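A sketch of the flag applied to a raw feed (-use_wallclock_as_timestamps is a standard ffmpeg input option; the pipe name, sample format, and rates are assumptions):

```
# timestamps come from the system clock instead of being synthesized sequentially
ffmpeg -use_wallclock_as_timestamps 1 -f s16le -ar 48000 -ac 1 \
  -i /tmp/rawaudio.fifo -c:a aac output.m4a
```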
clear && printf '\e[3J';npm run testservice queryObject.service xcast@xcast,portforwarding@xcast,mixeradmin@peer,virtualspeaker@peer,jsonio@peer,virtualcamera@peer
* pkvim does not support mouse over HTTP API. it is only supported over the ws streaming protocol.
runapp.sh hostfolder kinesis
* make sure to set up the bash environment on the target environment.
vim ~/.bashrc
export PATH=$PATH:/Applications/MySQLWorkbench.app/Contents/MacOS;export PS1='\w>';
* testxcast was updated to utilize portforwarding
clear && printf '\e[3J';npm run testservice queryObject.service xcast@xcast,portforwarding@xcast,mixeradmin@peer,virtualspeaker@peer,jsonio@peer
clear && printf '\e[3J';npm run testservice queryObject.service kinesis@xcast,kinesis@peer
clear && printf '\e[3J';npm run testservice queryObject.service xcast@xcast,mixeradmin@peer,virtualspeaker@peer,jsonio@peer
* onNewConnection will return dataBinding for socket which could be set to "manual"
6200038: implement apps/*.dev daemons
clear && printf '\e[3J';npm run testxcast
* nonBrowserAnalyzerNode
* createNodeFromXCastConfig: to allow data driven audio graph creation
* implement AudioNode.setName function for tracing data
* useful for preventing the screen saver from coming on
// tip: use xev for calibration and to see exactly what needs to happen
xev -geometry 600x600
clear && printf '\e[3J';npm run testwebaudio queryObject.verbose.forceUpToLevel 2 queryObject.verbose.transport true queryObject.action streamInputDeviceAsWave|streamWaveLoop
* implement nonBrowserAudioBuffer (non-interleaved IEEE754 32-bit linear PCM)
* implement nonBrowserSocketNode, nonBrowserScriptProcessor, createIzySpeakerNode
* add audio/lib.convertFloat32to16BitPCM to library. implement socket
npm run teststreamconversion queryObject.verbose.forceUpToLevel 2 queryObject.verbose.audioInputSample true queryObject.verbose.streamConversion true queryObject.sampleRate 16000
* storing the volume in the server (not the connection object) so that when restarting the client for testing, no reclicking and reconfiguring is required and the user can continue where they left off.
* it will act as debouncing so that for short sentences we don't get a loopy jump.
$IZYPYTHONROOT/python3 test.py
* this is needed for feeding the audio from the virtualcamera client and having synced audio/video
* randomize the deviceId to avoid collision
* useful for testing audio sync, video quality, etc.
* useful for testing data extractions, etc.
* works as a standalone client, with no networking required
npm run clienttest queryObject.client.type virtualcamera queryObject.user.id $IZYANIMATEUSERID queryObject.user.izyAnimateWorkspacePath ~/izyware/izy-idman-tools/id/$IZYANIMATEUSERID/virtual/izyanimate/$IZYANIMATEWORKSPACE
* improve the separation between networking and client logic layers
* improve the client programming model and queryObject schema
npm run client queryObject.address broadcastip:port queryObject.client.type virtualcamera queryObject.user.id $IZYANIMATEUSERID queryObject.user.izyAnimateWorkspacePath ~/izyware/izy-idman-tools/id/$IZYANIMATEUSERID/virtual/izyanimate/$IZYANIMATE_WORKSPACE queryObject.verbose.networking true
npm run analyzewave queryObject.user.id $IZYANIMATEUSERID queryObject.user.izyAnimateWorkspacePath ~/izyware/izy-idman-tools/id/$IZYANIMATEUSERID/virtual/izyanimate/$IZYANIMATEWORKSPACE
npm run usersGUIBackend queryObject.address $IZYANIMATESOURCEADDRESS queryObject.id $IZYANIMATEUSERID queryObject.verbose.networking true
npm run wss queryObject.verbose.networking true queryObject.address ws://localhost:20110 queryObject.verbose.audioInputSample true queryObject.verbose.audioInputLogginIntervalSeconds 2 queryObject.mode streamInputDeviceAsWave|streamWaveLoop
* allows for connecting to binary streams from other applications using a fifo pipe
npm run partialcontent queryObject.address localhost:10003 queryObject.verbose.audioInputSample true queryObject.verbose.partialContent true queryObject.mode streamStaticFile queryObject.fileName test-48.0-16.wav
npm run partialcontent queryObject.address localhost:10003 queryObject.verbose.audioInputSample true queryObject.verbose.partialContent true queryObject.mode resizeAndStreamWaveFile queryObject.fileName test-48.0-16.wav queryObject.factor 5
npm run partialcontent queryObject.address localhost:10003 queryObject.verbose.audioInputSample true queryObject.verbose.partialContent true queryObject.mode streamInputDeviceAsWave
* would allow streaming audio and video into the native browser without special javascript.
* move the widget to top
* improve refresh rate so that feedback is immediate.
* uses audio detection to animate videos
* allows using izyProxy.ldfn('path?fn') in python
* queryObject.videoMode captures this method
* applying workspace displayConfig during recording will make adjustment easier
* autocrop will get rid of passing x-y dimensions for cropping, which can change with user position
* calibration will allow normalization of feature detection and is necessary to make animation work across different shapes and geometries
* background removal using chroma key compositing (this is simpler than Selfie Segmentation using mediapipe, but requires more work with the screen setup).
npm run sleep
* failed outcome will return a non-zero exit code.
* useful for ipc across apps and catching errors.
* useful for debugging when the server crashes
* this will remove lingering video windows from a previous crash
* useful for broadcasting applications where control and video are separate desktops
* to test runAs: npm run videotest queryObject.asUser bg
npm run testbroadcast
* debounce the change detection and define a threshold for change.
* optimize framerate by introducing delay
* output averaging: remember the previous pick and allow smooth transition on updates
* improve frame speed rate
* black background
* white background will generate glare in dark room
* disable password prompt and quietly stop
* automatic login
* build process
rm ../camera.zip; zip -r ../camera.zip . -x .git/\* -x node_modules/\* .
[servers/ubuntu]
unzip -o camera.zip -d camera
* unstable client connections will not disturb the application
* allows for aggregating views on devices and other areas.
* replace legacy device management raw_tokendata management
* replaces cptools functionality
* improves device context capture using 'browsercontext5' and referencing rawtokendata_id
* legacy cptools was done via the 'cptoolsuserid' cookie in cptools/pkgdashboard/admin.
~/izyware/izy-idman-tools/id/<>/Desktop to izyidman records
izyidman/5/pkgdashboard to separate workspace

rtsp-stream-from-ip-camera-delay-in-input-in-preview: https://obsproject.com/forum/threads/rtsp-stream-from-ip-camera-delay-in-input-in-preview.112518/
ffmpeg-send-video-to-dev-video0-on-ubuntu: https://askubuntu.com/questions/881305/is-there-any-way-ffmpeg-send-video-to-dev-video0-on-ubuntu
how-to-install-the-mpeg-4-aac-decoder-and-the-h-264-decoder: https://askubuntu.com/questions/214421/how-to-install-the-mpeg-4-aac-decoder-and-the-h-264-decoder
virtual-camera-for-opencv-using-v4l2loopback: https://arcoresearchgroup.wordpress.com/2020/06/02/virtual-camera-for-opencv-using-v4l2loopback/
wowza-streaming-protocols: https://www.wowza.com/blog/streaming-protocols
nvidia-container-toolkit: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html
cuda-installation-guide-linux: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
cuda-compatibility: https://docs.nvidia.com/deploy/cuda-compatibility/
official tensorflow installation: https://www.tensorflow.org/install/source_windows
Pytorch not working on Python 3.10: https://discuss.pytorch.org/t/pytorch-pip-installation-not-working/134884
Deepface Lab: https://www.youtube.com/watch?v=tW7EENTWXRk
DeepFaceLive: https://github.com/iperov/DeepFaceLive
Build Real Time Face Detection With JavaScript: https://www.youtube.com/watch?v=CVClHLwv-4I
Face landmarks detection - Opencv with Python: https://www.youtube.com/watch?v=MrRGVOhARYY
Paper - Face2Face - Real time Face Capture and Reenactment of RGB Videos: http://www.graphics.stanford.edu/~niessner/papers/2016/1facetoface/thies2016face.pdf
Face2Face - Real time Face Capture and Reenactment of RGB Videos: https://www.youtube.com/watch?v=s1DPhc9HNQ0
github: https://github.com/izyware/izy-idman-tools
npmjs: https://www.npmjs.com/package/izy-idman-tools
vnc-rfb-client: https://github.com/filipecbmoc/vnc-rfb-client
valence: https://github.com/simmons/valence
understanding-trrs-and-audio-jacks: https://www.cablechick.com.au/blog/understanding-trrs-and-audio-jacks/