  • izy-idman-tools

    Enterprise developer identity management solutions.

    This product includes software developed by Inferno Nettverk A/S, Norway.

    Overview and Legacy Apps

    • (legacy) taskraid (notes, etc.) and the new db_personidentity are utilized in the information bucket application
    • cptools
      * (legacy) db_personidentity
      * avatar
    • device manager
      * usage analytics (browsing)
      * trigger management
    • storage service
      * apps/storage/pkgdashboard:api/stream
      * cb_storagedata
      * supports a generic storage solution for data

    INSTALLATION

    Dependencies
    • Node JS

    sudo apt install npm;sudo npm install pm2 -g;

    cd ~;npm install izy-idman-tools;mv node_modules/izy-idman-tools .;

    cd ~/izy-idman-tools;

    npm install;

    • Docker
    • Python: Note that if you are planning on using the video machine learning with GPU enabled, you should use Python 3.8; other versions such as Python 3.10 will not work, because PyTorch does not work on Python 3.10. (See the quick version check below.)
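    Before moving on, you can optionally sanity-check that the dependencies above resolve to the expected versions (a minimal sketch; IZYPYTHONROOT is set in the platform-specific steps below):

    node --version
    pm2 --version
    docker --version
    $IZYPYTHONROOT/python3 --version   # should report 3.8.x for the GPU video tooling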

    Ubuntu

    export IZYPYTHONROOT=/usr/bin

    sudo apt-get -y install cmake python3-pip vim

    Ubuntu: only if using Python audio tools

    sudo apt-get -y install portaudio19-dev

    MacOS

    Make sure to use Rosetta on Apple M1.

    Use Homebrew to install Python 3 and set IZYPYTHONROOT=/usr/local/bin:

    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"

    brew install python@3

    brew install portaudio

    Core Video

    $IZYPYTHONROOT/pip3 install cmake

    $IZYPYTHONROOT/pip3 install dlib numpy opencv-python -vvv

    / See the instructions under the GPU section for installing Python tensorflow bindings /

    $IZYPYTHONROOT/pip3 install matplotlib pyyaml tqdm imageio scikit-image scikit-learn pandas imageio-ffmpeg
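    Optionally, a quick check that the core packages installed above import cleanly (a minimal sketch using the same interpreter):

    $IZYPYTHONROOT/python3 -c "import cv2, dlib, numpy, skimage; print('cv2', cv2.__version__, 'dlib', dlib.__version__)"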

    Audio

    $IZYPYTHONROOT/pip3 install pyaudio numba librosa -vvv

    If using mediapipe for background removal

    $IZYPYTHONROOT/pip3 uninstall tensorflow protobuf

    $IZYPYTHONROOT/pip3 install tensorflow protobuf

    $IZYPYTHONROOT/pip3 install mediapipe -vvv

    GPU Compatibility

    We recommend using NVIDIA GPUs. Access to the GPU is enabled by the CUDA library. The good news is that, as far as picking the CUDA version goes, you can start with 11+, since NVIDIA provides backwards compatibility. See cuda-compatibility. Keep this in mind when picking a CUDA version for the installation of TensorFlow.
    Windows
    For Windows machines, the drivers are already installed out of the box and you can use the following command to check the version info:

    nvidia-smi

    / You should see something like below /

    / dell XPS /

    Driver Version: 472.19 and CUDA Version: 11.4

    / ASUS /

    Driver Version: 510.47.03 CUDA Version: 11.6

    Linux Docker Mode

    Follow the steps below (this is from nvidia-container-toolkit):

    • If not already installed, make sure the driver is installed. As part of the process, note that when faced with the options:

    [does not work] sudo apt-get install -y nvidia-open

    [works] sudo apt-get install -y cuda-drivers
    • Be sure to store the kernel and nvidia driver version in your home directory ~/nvidia_setup.txt for future reference in case the driver breaks due to auto updates, etc. See below:

    modinfo nvidia >> ~/nvidia_setup.txt

    filename: /lib/modules/5.15.0-117-generic/updates/dkms/nvidia.ko

    alias: char-major-195-*

    version: 560.28.03

    find /lib/modules/$(uname -r) | grep nvidia >> ~/nvidia_setup.txt

    /lib/modules/5.15.0-117-generic/kernel/drivers/net/ethernet/nvidia

    /lib/modules/5.15.0-117-generic/kernel/drivers/net/ethernet/nvidia/forcedeth.ko

    /lib/modules/5.15.0-117-generic/kernel/drivers/i2c/busses/i2c-nvidia-gpu.ko

    /lib/modules/5.15.0-117-generic/kernel/drivers/video/fbdev/nvidia

    /lib/modules/5.15.0-117-generic/kernel/drivers/video/fbdev/nvidia/nvidiafb.ko

    /lib/modules/5.15.0-117-generic/kernel/drivers/usb/typec/altmodes/typec_nvidia.ko

    /lib/modules/5.15.0-117-generic/kernel/drivers/platform/x86/nvidia-wmi-ec-backlight.ko

    /lib/modules/5.15.0-117-generic/updates/dkms/nvidia.ko

    /lib/modules/5.15.0-117-generic/updates/dkms/nvidia-peermem.ko

    /lib/modules/5.15.0-117-generic/updates/dkms/nvidia-uvm.ko

    /lib/modules/5.15.0-117-generic/updates/dkms/nvidia-drm.ko

    /lib/modules/5.15.0-117-generic/updates/dkms/nvidia-modeset.ko

    sudo nvidia-smi >> ~/nvidia_setup.txt

    +-----------------------------------------------------------------------------------------+

    | NVIDIA-SMI 560.28.03 Driver Version: 560.28.03 CUDA Version: 12.6

    • cuda-installation-guide-linux: Follow the post-installation steps and build the cuda-samples to make sure everything is working

    export PATH=/usr/local/cuda-11.8/bin${PATH:+:${PATH}}

    ~/cuda-samples/Samples/1_Utilities/deviceQuery/deviceQuery

    • nvidia-container-toolkit: Install Docker and the container toolkit. Make sure the Docker CLI is accessible to the current user (reboot required)

    sudo groupadd docker;sudo usermod -aG docker $USER;sudo reboot;

    docker run hello-world

    Finally, make sure all pieces fit together, i.e.

    docker run --gpus all nvidia/cuda:11.4.2-cudnn8-runtime-ubuntu20.04 nvidia-smi

    If you are running into issues after a successful installation, such as:

    nvidia-container-cli: initialization error: nvml error: driver not loaded: unknown.

    sudo nvidia-smi

    NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.

    Make sure the driver is installed for your kernel:

    find /lib/modules -iname nvidia

    find /lib/modules/$(uname -r) -iname nvidia

    dpkg -l | grep nvidia

    modinfo nvidia-520

    If they are present but not loaded this is most likely caused by a kernel or bios auto update:

    Kernel auto update?

    apt list --installed | egrep '^linux' | grep $(uname -r)

    mkdir tmp; cp /var/log/apt/*.gz tmp/;cd tmp;gunzip *;grep -B 4 -A 4 -nH $(uname -r) *;

    Linux Native Mode
    For Ubuntu 20, the drivers will not be installed by default, but NVIDIA offers a step-by-step installation guide here: nvidia-driver-installation-guide

    So follow these steps:

    ubuntu-drivers devices

    / You should get the following /

    / dell XPS /

    == /sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0 ==

    modalias : pci:xxxxxxxx

    vendor : NVIDIA Corporation

    driver : nvidia-driver-470-server - distro non-free

    driver : nvidia-driver-510 - distro non-free recommended

    driver : xserver-xorg-video-nouveau - distro free builtin

    / Asus (NVIDIA GeForce RTX 3060 Laptop GPU) /

    == /sys/devices/pci0000:00/0000:00:06.0/0000:01:00.0 ==

    modalias : pci:v000010DEd00002520sv00001043sd000016E2bc03sc00i00

    vendor : NVIDIA Corporation

    driver : nvidia-driver-510 - distro non-free recommended

    driver : nvidia-driver-470 - distro non-free

    driver : nvidia-driver-470-server - distro non-free

    driver : xserver-xorg-video-nouveau - distro free builtin

    Afterwards, you can use the official TensorFlow installation link and pick a CUDA version that is compatible with your hardware; below are some examples:

    sudo apt install nvidia-cuda-toolkit

    nvcc --version

    / dell ubuntu /

    pip3 install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio==0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html

    / dell windows Driver Version: 461.92 CUDA Version: 11.2 /

    / Menu Pick: LTS (1.8.2), CUDA 11.1 /

    pip3 install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html

    / Lenovo: Driver Version: 472.19 CUDA Version: 11.4 /

    pip install torch==1.10.2+cu102 torchvision==0.11.3+cu102 torchaudio===0.10.2+cu102 -f https://download.pytorch.org/whl/cu102/torch_stable.html

    To make sure that the PyTorch installation is CUDA enabled, try running the following Python script:

    / check CUDA compatibility by /

    import torch

    torch.cuda.current_device()

    torch.cuda.get_device_name(0)
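    The same check can also be run non-interactively from the shell using the standard torch.cuda API (a minimal sketch):

    $IZYPYTHONROOT/python3 -c "import torch; print(torch.cuda.is_available()); print(torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'no CUDA device')"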

    If you are using NVIDIA GeForce RTX 3060, you may see this message:

    NVIDIA GeForce RTX 3060 Laptop GPU with CUDA capability sm_86 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37. If you want to use the NVIDIA GeForce RTX 3060 Laptop GPU GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

    Please refer to this link for further instructions: https://forums.developer.nvidia.com/t/rtx-3060-can-support-cuda-with-pytorch/178078/2

    Utilizing the GPU can improve the performance roughly tenfold, for example:

    with CPU: 210 (226s) 2.8 it/s

    with GPU: 19 (29s) 31 it/s

    Enabling GPU for 3rd party apps and Chrome
    Make sure OpenGL and Vulkan are enabled:

    vainfo

    sudo apt install mesa-utils vulkan-utils

    sudo prime-select nvidia

    sudo reboot

    export DISPLAY=:0;

    glxinfo | grep "OpenGL"

    vainfo
    troubleshooting
    If you run into booting issues after installing the Nvidia driver on AMD systems:

    NVRM cpuidInfoAMD: Unrecognized AMD processor in cpuidInfoAMD

    boot into recovery mode by pressing ESC, drop into shell and then

    sudo apt-get purge nvidia-*

    Make sure your system and related binaries are up to date:

    sudo apt install --reinstall linux-image-generic

    sudo apt install --reinstall linux-headers-generic

    sudo apt remove --purge '^nvidia-.*';sudo apt remove --purge '^libnvidia-.*';

    sudo rm /etc/X11/xorg.conf | true;sudo rm /etc/X11/xorg.conf.d/90-nvidia-primary.conf | true;sudo rm /usr/share/X11/xorg.conf.d/10-nvidia.conf | true;sudo rm /usr/share/X11/xorg.conf.d/11-nvidia-prime.conf | true;

    sudo rm /etc/modprobe.d/nvidia-kms.conf | true;sudo rm /lib/modprobe.d/nvidia-kms.conf | true;

    sudo apt update -y && sudo apt full-upgrade -y && sudo apt autoremove -y && sudo apt clean -y && sudo apt autoclean -y

    Install the driver

    sudo ubuntu-drivers autoinstall

    check which driver is installed by

    dkms status

    nvidia, 520.56.06, 5.15.0-52-generic, x86_64: installed

    / to get the path and other info related to the device /

    modinfo nvidia

    sudo modprobe nvidia

    modprobe: ERROR: could not insert 'nvidia': No such device

    sudo nvidia-smi

    NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.

    / if not installed /

    sudo dkms remove nvidia/(version number)

    sudo dkms install nvidia/(version number)
    Audio Devices
    Most devices come equipped with a 3.5mm audio jack, which could be TS, TRS, or TRRS (T = Tip, R = Ring, S = Sleeve). The most common case is a TRRS jack. There are two common arrangements for a four-conductor jack, and they are not compatible. See understanding-trrs-and-audio-jacks
    • CTIA or American Headset Jack (AHJ): has the microphone connector point on the sleeve end, with the signalling (or ground) connector on the second ring. This arrangement is used by most newer smartphones. Apple uses a similar connector arrangement, but uses a non-standard microphone and control signalling method.
    • OMTP standard: It has the signalling connector (ground) next to the sleeve, with the microphone connector on the second ring. This is the reverse of the AHJ arrangement. It is used by older smartphones.

    CTIA is the most commonly used. Note that some manufacturers call the TRRS jack an audio combo jack (ASUS).

    To separate the signals, the device is marketed as a "Headset Splitter Adapter", "Headset Adapter Y Splitter", or "Audio Splitter".

    When the device isn't equipped with an audio jack, you can utilize an external sound card:

    • USB to 3.5mm Jack Audio Adapter with 3.5mm TRS Headphone and Microphone Jack (Female)
    User Virtual Shell Environment
    • run the following

    docker build --rm -t izyidman .

    • update the paths

    * make the scripts available in your cli environment:

    / put this in your shell profile, e.g. ~/.zshrc or ~/.bashrc /

    You can add the tool to your PATH environment variable, i.e.

    export PS1="%d % : "

    export PATH=$PATH:~/izy-idman-tools;

    * add ~/izyware/izy-idman-tools to Docker -> Preferences... -> File Sharing.

    SideBySide Sessions

    This will allow you to run side-by-side Firefox sessions and run your browser-based applications (vscode in tunnel mode), etc.

    /Applications/Firefox.app/Contents/MacOS/firefox --profile /var/folders/6n/pvqhgtkj3r5fmsqlsnbv93m80000gp/T/tmp.9 &

    /Applications/Firefox.app/Contents/MacOS/firefox --profile /var/folders/6n/pvqhgtkj3r5fmsqlsnbv93m80000gp/T/tmp.17 &

    Note that Chrome will open in the same session even if --profile-directory is specified:

    export CHROME_PATH=/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome

    $CHROME_PATH --profile-directory=/var/folders/6n/pvqhgtkj3r5fmsqlsnbv93m80000gp/T/tmp.dzm4bVTL

    $CHROME_PATH --user-data-dir=/var/folders/6n/pvqhgtkj3r5fmsqlsnbv93m80000gp/T/tmp.dzm4bVTL
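    For a fully isolated session per launch, one option is to generate a throwaway --user-data-dir on the fly (a sketch; the mktemp-based directory is an assumption, not part of the IzyWare tooling):

    # each invocation gets its own profile directory and therefore its own session
    ISOLATED_PROFILE=$(mktemp -d)
    "$CHROME_PATH" --user-data-dir="$ISOLATED_PROFILE" --no-first-run &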

    Run id to get the userid:

    id

    sudo launchctl asuser 502 open $CHROME_PATH

    A better alternative is to use XQuartz. First create a new xterm:

    xterm -fa Monaco -fs 14 -sb -sl 1000 -fg blue -bg white

    Then fix the cut/paste on macOs by referring to https://superuser.com/questions/517878/why-cant-i-paste-into-xterm-xquartz. Selecting text in xterm, and hitting ⌘+C to copy, should work, if you have turned on "Enable key equivalents under X11" under the "Input" tab, and "Update Pasteboard when CLIPBOARD changes" under the "Pasteboard" tab. For Paste:

    To fix Cmd+V on Mac, create ~/.Xdefaults and add the following:

    *VT100.translations: #override \
        Meta <Key>V: insert-selection(PRIMARY, CUT_BUFFER0) \n
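    To apply the new resources to subsequently launched xterms without restarting XQuartz, merge them into the X resource database (standard X11 tooling):

    xrdb -merge ~/.Xdefaults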

    Then launch and run applications:

    ~/plat/p/apps/aws/remoteaccess/runx.sh . "~/openchrome.sh"

    The Izy Shell Tool

    You can use the shell inside a Docker container to allow multitasking across different user profiles.

    Running the tool

    izyshell.sh [hostport:guestport]

    izyaws.sh s3 ls --profile eugene

    The following directories will be available inside the shell:

    /izycommondir -> $IZYSHELL_COMMONDIR

    /izyhostdir -> $(pwd)

    Capturing and Consuming user profiles

    Follow these steps

    [shadow admin, ofc session]

    data-collection/app/device-manager/list: find the idtoken for the profile

    from the legacy izyware, kick off a taskrunner item with the following parameters

    runtimeid=idtoken

    parameters=statecapture5.5

    this can take up to 15 minutes.

    [workstation]

    run a socks proxy server on 1080

    docker rm socks5; docker run -d --name socks5 -p 1080:1080 serjs/go-socks5-proxy

    find the ip address for the workstation

    ifconfig | grep 10.42

    [host]

    launch chrome

    izy.devops "ssh?runx" . "~/openchrome.sh"

    [host chrome]

    Log in as the shadow admin (make sure it is in the same tab -- or the automation won't work)

    install the toolbar

    if the bucket is already assigned, then go to data-collection/app/information-bucket, find the profile > device switch > click > ok.

    if the bucket is not already assigned, go to data-collection/app/device-manager/list: find the profile

    if not already set, set the bucketId from the UI (if no bucket is assigned, then go to the payloads tab, find the payload, click on the row > popup > click switch)

    Selenium Version

    IzyShell may also be used to drive Selenium-driven tasks. To set up Selenium on Ubuntu:

    install chrome

    wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb

    sudo dpkg -i google-chrome-stable_current_amd64.deb

    sudo apt-get install -f

    google-chrome --version

    get the version that matches

    google-chrome --version

    wget https://storage.googleapis.com/chrome-for-testing-public/128.0.6613.84/linux64/chromedriver-linux64.zip

    unzip chromedriver-linux64.zip;

    rm chromedriver-linux64.zip;

    install it in /usr/bin so that it is in the PATH

    sudo mv chromedriver-linux64/chromedriver /usr/bin/chromedriver;sudo chown root:root /usr/bin/chromedriver;sudo chmod +x /usr/bin/chromedriver;rm -rf chromedriver-linux64;

    Test It

    chromedriver --url-base=/wd/hub
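    With chromedriver running as above, a quick health check from another terminal (9515 is chromedriver's default port; the /wd/hub prefix comes from the --url-base flag):

    curl -s http://localhost:9515/wd/hub/status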

    Working around networking issues

    Disabling Wired Network

    Use the following steps (a combined helper script follows below):

    1. get the name from the following command

    networksetup -listallnetworkservices

    2. disable the device

    networksetup -setnetworkserviceenabled "Name from Above" off

    3. enable the device

    networksetup -setnetworkserviceenabled "Name from Above" on
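    For convenience, the two steps can be wrapped into a small helper script (a sketch; the service name is whatever networksetup -listallnetworkservices reports, and the set commands may require administrator privileges):

    #!/bin/sh
    # usage: ./toggle-network.sh "Wi-Fi"
    SERVICE="$1"
    networksetup -setnetworkserviceenabled "$SERVICE" off
    sleep 5
    networksetup -setnetworkserviceenabled "$SERVICE" on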

    Socks Service

    We recommend using the standard OpenSSH stack. The basic setup will consist of an internet accessible service (IAS) with TCP://$IZYSOCKSSERVERIP:$IZYSOCKSSERVERPORT open to the public.

    The SOCKS clients can then either initiate the connections from the server or initiate the connections from a third-party box (3B) in your network (hence the name double dipping, or DD). In both scenarios, the client will connect to socks5://$IZYSOCKSSERVERIP:$IZYSOCKSSERVERPORT

    For the DD scenario to work, you will need to:

    • on 3B use SSH to setup a socks service
    • on 3B use SSH to setup port forwarding from the local socks service (3B) to IAS on the $IZYTCPACCESSPORT.
    IAS Setup and Issues
    On the IAS, when you forward a TCP port (either locally or remotely), by default SSH only listens for connections to the forwarded port on the loopback address (localhost, 127.0.0.1). This means only other programs running on the same host as the listening side of the forwarding can connect to the forwarded port. This is a security feature, since there is no authentication applied to such connections. To override this behavior, set GatewayPorts to yes:

    vim /etc/ssh/sshd_config

    set GatewayPorts to yes

    sudo systemctl restart ssh.service

    Also, sometimes on the IAS there will be dangling sshds attempting to process forwarding requests on $IZYSOCKSSERVERPORT. Make sure they are cleared:

    ssh -i $IZYSOCKSSERVERPEMFILE $IZYSOCKSSERVERSSHUSER@$IZYSOCKSSERVERIP

    sudo lsof -i -P -n | grep $IZYTCPACCESSPORT

    and find all of them and kill them.

    On the IAS make sure that the binding address for the port is 0.0.0.0 (not 127.0.0.1)

    netstat -tupln | grep $IZYTCPACCESSPORT

    0.0.0.0:19999
    3B Setup and Issues
    For the 3B, setting up a local SOCKS server is very easy. Most Unix and macOS systems come equipped with SSH. For example, to set up a local SOCKS server on Mac:

    System Preferences > Sharing > turn Remote Login on

    / Enable Socks Server on Local /

    ssh -v -D $IZYTCPACCESSPORT -C -N $USER@localhost

    Then "publish" the local port on the remote server:

    / full background /

    ssh -v -N -f -R $IZYTCPACCESSPORT:localhost:$IZYTCPACCESSPORT -i $IZYSOCKSSERVERPEMFILE $IZYSOCKSSERVERSSHUSER@$IZYSOCKSSERVERIP

    / Interactive console /

    ssh -v -g -R $IZYTCPACCESSPORT:localhost:$IZYTCPACCESSPORT -i $IZYSOCKSSERVERPEMFILE $IZYSOCKSSERVERSSHUSER@$IZYSOCKSSERVERIP
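    Once the port is published, you can verify the end-to-end path from any client by routing a test request through the SOCKS endpoint described above (the target URL here is just an example):

    curl -sS --socks5-hostname $IZYSOCKSSERVERIP:$IZYSOCKSSERVERPORT -o /dev/null -w '%{http_code}\n' https://example.com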

    Access External VNC to Client Machine by Having the VNC client on the Ubuntu box

    This is the preferred method because it will allow accessing and monitoring the VNC session from the xcast app web interface.

    The solution is to use the X virtual framebuffer.

    Xvfb :1 &

    x11vnc -display :1 &

    to get the client ip, use arp-scan. See the internet sharing section

    export DISPLAY=:1;vncviewer client_ip:5900 &

    Access External VNC to Client Machine through Ubuntu Host Port Publishing

    Create a ~/vncpublish.sh script on the Ubuntu host for forwarding the VNC port:

    rm fifo0

    rm fifo1

    mkfifo fifo0 fifo1

    nc -l UBUNTUUSERIP 10000 > fifo0 < fifo1 &

    nc CLIENTMACHINEIP 5900 < fifo0 > fifo1 &

    Then use the vnc script

    cli/remoteaccess/vnc.sh .

    VNC is based on the standard RFB protocol and generally speaking most VNC client should work. Notice that MacOS implements Apple Remote Desktop (ARD) based on VNC and using version "3.889" of the RFB protocol, which is non-standard. See https://www.tenable.com/blog/detecting-macos-high-sierra-root-account-without-authentication and https://cafbit.com/post/appleremotedesktop_quirks/ for more info.

    The following tools can be used

    • MacOS - VNC Viewer
    • Ubuntu Remmina: Use the VNC connection type, not RDP. Alt-Tab won't work.

    remmina --server=ip --protocol=vnc

    • Ubuntu RealVNC client:

    vncviewer ip:5900 &

    • NodeJS open source implementation vnc-rfb-client (does not support ARD)
    • Java open source implementation valence

    Environment variables to configure the CLI

    Environment variables provide another way to specify configuration options and credentials, and can be useful for automation.

    Precedence of options

    Using a parameter on the CLI command line will override any value from the corresponding environment variable.

    Supported environment variables

    • IZYSHELL_COMMONDIR: Host path mapping to shell /izycommondir
    • IZYDEFAULTUSER: Specifies an ID associated with a user or group.
    • IZYAUTHORIZATIONTOKEN: Specifies the session key for the cloud access APIs.

    Clipboard Access From the Docker Environment

    MacOS

    You can use a single pipe to push stdout result into the clipboard:

    izyaws.sh eugene whoami | pbcopy

    To pick up content from files, etc.

    pbcopy < izyshell.sh eugene

    To grab contents from clipboard:

    pbpaste > mydata

    Remote Access to Mac
    macOS ships with built-in VNC and SSH.

    For file sharing, open the Apple menu > System Preferences, then click Sharing and use "Share files and folders using SMB". This will allow collaboration with Windows and Mac users.

    Windows

    replace pbcopy with clip.
    Remote Access to Windows Boxes
    Windows does not ship with a built-in VNC server. You can use TightVNC, as it works with standard VNC clients. Also note that for newer versions of Windows, the PIN is not the same as the password. In order to access the current user and set it from the command line, use:

    net user

    To copy files across Mac and Windows, you can share a folder from windows, and then mount the folder using

    Finder > Go > Connect to server

    type smb://windows_ip/folder

    Windows 10+ ships with SSH, to enable it use

    Settings > Apps > Apps and features > Optional features > add Feature

    run powershell as administrator

    Get-WindowsCapability -Online | ? Name -like 'OpenSSH.Server*'

    Get-Service -Name ssh

    Start-Service sshd

    Set-Service -Name sshd -StartupType 'Automatic'

    Start-Service 'ssh-agent'

    Set-Service -Name 'ssh-agent' -StartupType 'Automatic'

    Linux

    replace pbcopy with xclip.
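    For example, the macOS one-liners above translate roughly to the following on Linux (xclip is available from the Ubuntu repositories):

    izyaws.sh eugene whoami | xclip -selection clipboard

    xclip -selection clipboard -o > mydata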

    IzyVatar Management

    Initial Setup/Restart checklist

    • Disable Notifications

    * Amber Alerts

    * Galaxy S9: Settings> Connections > More Connection Settings > Wireless Emergency Alerts > Settings > Alert Types
    • LP
    * Make sure it is moved to the application folder. This is needed in order to automate from the commandline.

    * Activate

    * Create

    * Meet In

    * Meet Out

    * Phone Output

    • AHJ
    * Make sure it is moved to the application folder. This is needed in order to automate from the commandline.

    * Activate

    * Import profile. Or optionally create a new profile.

    * Phone Output > AUAudioFile Player > 10 Band HQ (1X max, progression towards base - otherwise it will become the telephone effect), AUNewP (-233,8,ON), AUMultiBComp () > Volume (79, 2X) > PeakRMS > Meet In

    * Profile 2

    * Branch Out Meet Out with Low Volume for Smoke Tests

    Multi Band Compression

    10 Band HQ

    • System

    * Install latest nodeJs and izy-idman-tools

    * transfer the AI libraries

    mkdir -p ~/izyware/izy-idman-tools/id;

    rsync ....

    open ~/izyware/izy-idman-tools/id;

    / switch Finder to columns view, create an alias for izyware and move the alias to Desktop /

    mkdir -p ~/izyware/izy-idman-tools/ai;

    git clone https://github.com/davisking/dlib-models.git ~/izyware/izy-idman-tools/ai/dlib-models;

    bunzip2 ~/izyware/izy-idman-tools/ai/dlib-models/shapepredictor68facelandmarks.dat.bz2

    * Make sure power settings are setup correctly post restart or crash

    sudo systemsetup -setcomputersleep Never

    * System Preferences > Sounds

    * Check "Show volume in menu bar"

    * Set the input volume for internal Microphone to maximum

    * System Sound Default Output > Meet Out

    * System Sound Default Input > Meet In

    * Make sure the screen font size and object sizes are large enough that they can be used for side-by-side viewing and interacting.

    * Use data collection features to setup session and messaging.

    • SKP
    * Sign in using the work account

    * Privacy > Only allow calls from contacts

    * AV > Mic > Meet Out

    * AV > Automatically Adjust Microphone Settings (Off)

    * AV > Noise Cancellation (Off)

    * AV > Automatically Adjust Settings (OFF)

    * AV > Speakers > Phone Output

    * Calling > Advanced > Answer Incoming Calls Automatically

    * Calling > Show call window when in background (yes)
    • OBS
    * Install OBS WebSockets

    * Pick "Only using Virtual Camera"

    * Default canvas resolution will be set to your screen resolution (1920x1080). Reduce that to 720p

    * Image customization, use the following sequence of commands

    find / -iname placeholder.png 2> /dev/null | grep png | read PLACEHOLDERIMAGEPATH; dirname $PLACEHOLDERIMAGEPATH | read PLACEHOLDERIMAGEPATH;echo copying to $PLACEHOLDERIMAGEPATH;sudo cp placeholder.png $PLACEHOLDERIMAGEPATH;ls -l $PLACEHOLDERIMAGEPATH;

    * profile basic01

    * Scene configuration

    * Scene > Sources > izyware/izy-idman-tools/id/...

    * Scene 2 > Sources > izyware/izy-idman-tools/id/...

    * For scene transitions use Fade.400ms

    * Source configuration:

    * Loop option is enabled

    * All the other options are disabled. In particular restart should be disabled, otherwise the switching will not work well.

    • ZM

    * Always signed in

    * Bookmark the personal room link

    * Always have chat window open

    * Settings > Share Screen > Window Size when sharing: Maintain current size

    * Settings > Share Screen > When I Share > ... : Show all options

    * Settings > Mic > Meet In

    * Settings > Speaker > Meet Out

    * Background

    * izyware/izy-idman-tools/id/...

    • Workstation
    * The scaffolding for devices and the basic user should be separate for movement

    * Use more modular pieces and replace long components with smaller attachable ones.

    * More space required in the backend for screen

    * Top bar and cross bar are required for stability

    * camera setup

    * external cameras are preferred to the phone camera

    * the camera angle is already set up, so it will not introduce stress into the system and requires no dismantling.

    * computer internal webcams cannot be easily tilted, mounted, or have their distance to the target adjusted.

    * the final angles and settings can be tested on an external subject before deployment

    • Ubuntu: follow these steps

    * Setup a basic access point with blank screen

    / disable UEFI Secure Boot in the BIOS -- This can cause problems in installing NVIDIA drivers /

    / find all the computers on network /

    / the netgear web portal will show the registered machines with their ip /

    http://192.168.1.1/ (admin,password)

    / make sure SSH is enabled /

    / If you get lock errors due to update process, reboot and retry /

    sudo apt-get install openssh-server

    sudo systemctl enable ssh

    sudo systemctl start ssh

    / if you don't have a test user with sudo privilege, add it /

    useradd test

    usermod -aG sudo test

    /*

    refer to izy.devops section for create key-pair for easy sshing into the box

    and then ssh into the box

    */

    izy.devops "ssh?shell" .

    / ssh into the box /

    / install tools /

    sudo apt -y install xvfb x11vnc xdotool xterm arp-scan net-tools vim tigervnc-viewer imagemagick nodejs npm ffmpeg curl;

    / turn off all auto updates. Auto updates can cause issues with custom drivers such as NVIDIA /

    sudo vim /etc/apt/apt.conf.d/20auto-upgrades

    / set the machine name AS SEEN BY THE CLIENT WORKSTATION pick a name consistent with the reported ISP-id, reboot required /

    sudo vim /etc/hostname

    / some NVIDIA hardware may cause boot issues, so disable splash screen /

    sudo vim /etc/default/grub

    GRUB_CMDLINE_LINUX_DEFAULT=""

    sudo update-grub

    / connect to gnome desktop and vnc to host:5900 /

    / NOTE: during the installation of Ubuntu, pick the option where the user is automatically logged in or this won't work /

    export DISPLAY=:0;x11vnc;

    / enable internet connection sharing (ICS) /

    If using wired:

    Network > Wires (Select the interface you would like to share) > Settings (Cogwheel) > IPV4 > Share to other computers

    If using a Wireless Hotspot, this uses the NetworkManager service from the GNOME Desktop:

    sudo vim /etc/NetworkManager/system-connections/Hotspot

    nmcli -p -f general,wifi-properties device show wlan0

    / block service ports (ssh, etc.) from being accessed /

    sudo ufw deny from 10.42.0.0/24 to any

    sudo ufw allow from 192.168.1.0/24 to any

    sudo ufw show added

    sudo ufw enable;sudo ufw status numbered

    / if you need to allow only a single port, make sure to add to the top /

    sudo ufw insert 1 allow proto tcp from 10.42.0.0/24 to 10.42.0.1 port 8082

    / to get the client ip - grab enXXXXXXXX from ifconfig /

    sudo arp-scan --localnet -I enXXXXXXXX

    If you need to have a VPN, start it

    echo test | sudo -S openvpn --config /home/test/vpn/client.ovpn --auth-user-pass /home/test/vpn/pass.txt &

    disable the default GUI boot and gdm - this is required for kinesis

    (to restore, sudo systemctl set-default graphical will do the reverse)

    sudo systemctl set-default multi-user

    sudo service gdm stop

    * start the kinesis app in dev mode

    clear && printf '\e[3J';~/plat/p/izyidman/5/cli/remoteaccess/runapp.sh ~/plat/p/servers/ kinesis;

    mkdir -p ~/beta;rsync -av --exclude="node_modules/*" ~/plat/p/izyidman/5/ ~/beta;cp ~/beta/queryObject..xcast.json ~/beta/queryObject.xcast.json;

    cd ~/beta/xcast;pm2 start apps/kinesis.js

    Remember that only on the first run, Kinesis may show a certificate error. Manually dismiss that and it should work fine the next time you run it.

    Once you have started all desired apps, save the app list

    pm2 save

    To manually bring back previously saved processes (via pm2 save):

    pm2 resurrect;
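    If the saved process list should also survive a reboot, pm2 can generate a boot script (standard pm2 feature; review the command it prints before running it):

    pm2 startup

    pm2 save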

    * enable audio: when the gui mode gets turned off, you may have trouble accessing audio as a non-root user. The issue could be related to pulseaudio server not allowing access to your user. Typically, pulseaudio server runs as the main admin (non root). Every user that belongs to the audio group is able to access the pulseaudio server.

    1. make sure that pulseaudio is running

    sudo ps aux | grep pulse

    2. add the user to the audio group by either editing /etc/group or the following

    sudo usermod -aG audio test

    sudo reboot

    To pick a device for your service, use the following. Make sure to not use the pulse devices, as they tend to introduce delays and performance issues. Instead use the ALSA devices, since they are faster.

    npm run getaudiodevices

    install alsautils

    sudo apt-get install alsa-utils

    list sound cards

    aplay -l

    use this for a nice graphical user interface from the terminal

    alsamixer

    If the input device does not switch from build-in mic to headphone jack follow these steps

    sudo alsactl restore

    try plugging in a headset device that will switch, which allows verifying the switch; after the switch, re-plug the cable

    3. Find the card controls and their values (volume, etc.)

    cat /proc/asound/cards

    amixer -c <card> scontrols

    amixer -c <card> scontents

    amixer -c <card> sget '<control>'

    4. Adjust the controls (volume, etc.)

    amixer -c 0 set '<control>' mute

    amixer -c 2 set 'Headphone' 87%

    amixer -c 2 set 'Headphone' 90%

    If you need to set the default sound card

    create /etc/asound.conf with following indices from /proc/asound/cards

    defaults.pcm.card 1

    defaults.ctl.card 1

    test speaker

    aplay -L

    AUDIO is the "CARD" name. Notice that it is case sensitive (AUDIO <> Audio).

    speaker-test -D sysdefault:AUDIO

    aplay ~/beta/xcast/data/test-48.0-16-mono.wav

    while [ 1 ] ; do aplay ~/beta/xcast/data/test-48.0-16-mono.wav;done

    test microphone

    sudo arecord -l

    sudo arecord -f S16_LE -d 10 -r 16000 --device="hw:3,0" /tmp/test-mic.wav

    * enable virtualcamera service

    make sure that the graphics card is configured correctly

    for example, xrandr might generate "Failed to get size of gamma for output default" when trying to add a new screen resolution

    sudo vim /etc/default/grub

    uncomment GRUB_GFXMODE and set to default xrandr 1920x1080

    sudo update-grub;

    sudo reboot;

    install v4l2loopback and make sure the user is part of the video group

    sudo apt-get install v4l2loopback-dkms v4l2loopback-utils;sudo usermod -aG video $USER;sudo reboot;

    add support for MPEG4 encoding

    sudo apt-get install ubuntu-restricted-extras;

    create two devices, /dev/video8-9 and update the config

    sudo rmmod v4l2loopback;sudo modprobe v4l2loopback devices=2 video_nr=8,9 exclusive_caps=1,1 card_label="vcam8,vcam9";

    sudo v4l2-ctl --list-devices;

    sudo v4l2-ctl -d8 --list-formats-ext

    start the source

    ./service/virtualcamera/feedcamerartsp.sh ubuntu-xxx

    * (Optional) Steps for further optimization

    / enable auto login -- requires a reboot /

    sudo vim /etc/gdm3/custom.conf

    xinput list / grab the ID for the buggy Atmel maXTouch Digitizer touchscreen /

    xinput --disable <id from above>

    / contrast /

    xgamma -gamma 1.500

    / brightness and resolution /

    xrandr | grep " connected" | cut -f1 -d " "

    xrandr --output STRINGFROM_ABOVE --brightness 1.1

    xrandr --output STRINGFROM_ABOVE --mode 1360x768

    / after some inactivity display brightness drops and we see the lines /

    to send keys and mouse to active app:

    xdotool type "Hello world"

    ; disable auto updates: https://linuxconfig.org/disable-automatic-updates-on-ubuntu-20-04-focal-fossa-linux

    ; TIP: list these by gsettings list-recursively | grep -i "suspend\|sleep"

    ; set desktop background to gray (note black will reflect off of screen)

    gsettings set org.gnome.desktop.background picture-options 'none'

    gsettings set org.gnome.desktop.background primary-color '#888888'

    ; at 5 it turns off the screen.

    gsettings set org.gnome.desktop.lockdown disable-lock-screen 'true'

    gsettings set org.gnome.desktop.screensaver lock-enabled false

    ; wakelock issues: at 4 minutes the screen resolution degrades

    ; disable automatic brightness

    gsettings set org.gnome.settings-daemon.plugins.power ambient-enabled false

    ; Power Saving > Dim Screen When Inactive (Disable)

    gsettings set org.gnome.settings-daemon.plugins.power idle-dim false

    ; Power Saving > Blank Screen (Never)

    gsettings set org.gnome.desktop.session idle-delay 0

    ; Suspend & Power Button

    ; Disable Notification: "Computer will suspend very soon because of inactivity"

    gsettings set org.gnome.settings-daemon.plugins.power sleep-inactive-ac-type 'nothing'

    gsettings set org.gnome.settings-daemon.plugins.power sleep-inactive-battery-type 'nothing'

    • control computer
    * phone input: computer microphone.
    • external webcams
    * disable microphone
    • samson microphone
    * lock the mic level to zero by taping it.
    • workcomputer
    * surgery

    * internal microphones and speakers.

    * wireless antenna

    * cover internal cam

    * cover all displays with black; otherwise they will cause glare on the tablet and its camera will pick it up from the tablet

    * Settings

    * displays

    * brightness=minimal

    * automatically adjust brightness: No

    * True Tone: No

    * resolution: scaled, larger text

    * Battery

    * Power Adapter

    * Turn off display after = never

    * Prevent your Mac from automatically sleeping when display is off

    Media Production Notes

    • Geometry

    * Use 720p frame size.

    * laptop: head to frame ratio of 1/4-3/16 Width, 1/2 - 1/3 is the standard for laptop shooting.

    * external: head to frame ratio of 1/8 Width, 1/4-1/5 height is the standard for laptop shooting.

    • Animation

    * movement generation

    * mechanical devices (chair) create smoother animations and more predictable picture quality.

    * AI mesh detection currently detects the following human gestures:

    * Hello

    * GoodBye

    * Smile

    * Thinking

    * Scratch Face, Head

    * Look down

    * Look away

    * Get up and leave

    * Come in and Sit

    * Raise Hand

    * Wave to people

    * Look at camera

    * Point at camera

    * Hand Gestures

    Smoke Tests

    • Audio: Dial into user, then on device play the AHJ audio

    * ZM > Settings > Test Audio: Should hit middle green bars

    • Video: Join an empty ZM session on Device

    * Turning off VC should show transparency (no OBS)

    • Full: Join empty ZM session on Device and play the AHJ audio

    * The sync loop should respond

    * The audio and images should fully work.

    Testing

    To test a fresh deployment, use:

    npm run testvideo

    npm run testvideo queryObject.izyAnimateWorkspacePath ~/izyware/izy-idman-tools/id/$IZYANIMATEUSERID/virtual/izyanimate/$IZYANIMATEWORKSPACE queryObject.inputVideoCapturePath $IZYANIMATESOURCEFRONTDEVICEID queryObject.videoGenerationMode test|direct|fomm|mixer1

    workstationaudioout@peer is not available yet

    clear && printf '\e[3J';npm run testservice queryObject.service xcast@xcast,portforwarding@xcast,mixeradmin@peer,audiooutput@peer,jsonio@peer,virtualcamera@peer

    Creating Audio/Video Pipelines

    There are several recommended tools available for this task:

    • ffmpeg/ffplay
    • vlc
    • gstreamer: to consume mpeg4 content refer to how-to-install-the-mpeg-4-aac-decoder-and-the-h-264-decoder:

    sudo apt-get install ubuntu-restricted-extras;

    sudo apt-get install libavcodec58 ffmpeg;

    The oldest streaming protocols are RTSP and RTMP. They have a few seconds of delay and are not web friendly. The quickest protocol is WebRTC. For an in-depth analysis refer to wowza-streaming-protocols.

    RTSP

    You should have the server set up (GStreamer's implementation is not ready to be used out of the box):

    docker run --rm -it -e RTSP_PROTOCOLS=tcp -p 8554:8554 -p 1935:1935 -p 8888:8888 aler9/rtsp-simple-server

    export RTSPURL=rtsp://localhost:8554/mystream

    To publish content to the RTSP service

    / on linux /

    sudo ffmpeg -f x11grab -framerate 15 -video_size 640x480 -i :0.0 -f rtsp $RTSPURL

    / macos capture camera /

    ffmpeg -f avfoundation -framerate 15 -video_size 640x480 -i 1:0.0 -f rtsp rtsp://192.168.1.12:8554/mystream

    To consume the content using vlc:

    vlc $RTSPURL

    for gstreamer, to debug and inspect that everything is working use playbin:

    / make sure mpeg4 and dependencies are installed /

    gst-launch-1.0 -v playbin uri=$RTSPURL uridecodebin0::source::latency=300

    To consume the content:

    / to show on desktop /

    gst-launch-1.0 rtspsrc location=$RTSPURL latency=300 ! decodebin ! clockoverlay ! autovideosink

    / to route to video device /

    rtspsrc location=$RTSPURL ! decodebin ! videoconvert ! v4l2sink device=/dev/video0

    As discussed earlier, RTSP may introduce delays but that can be optimized by using gstreamer at the source (as opposed to ffmpeg) and fine tuning the configuration for the latency and frame buffer size. please refer to rtsp-stream-from-ip-camera-delay-in-input-in-preview.

    Non RTSP alternatives

    • MPEG-TS (MPEGTS) streams using UDP push: You can use ffplay to render

    / ffplay will listen for udp packets /

    ffplay -fflags nobuffer -flags low_delay -framedrop -strict experimental udp://ip:port

    / generate the stream w/ ffmpeg from your webcam /

    ffmpeg -f avfoundation -list_devices true -i ""

    ffmpeg -f avfoundation -framerate 30 -i "1" -f mpeg1video -b 200k -r 30 -vf scale=640:360 udp://192.168.1.14:1234

    • rtp audio stream

    ffplay rtp://127.0.0.1:1234

    ffmpeg -re -f lavfi -i aevalsrc="sin(400*2*PI*t)" -ar 8000 -f mulaw -f rtp rtp://127.0.0.1:1234

    • WebRTC: it is more complex and requires more setup options. You would need to use a WebRTC server:

    * spreed: https://www.fosslinux.com/45051/how-to-install-spreed-webrtc-server-on-ubuntu.htm

    • Low-Latency HLS (LL-HLS): See this article https://www.wowza.com/blog/hls-vs-webrtc

    Using a hardware input camera

    On Linux systems, make sure your camera device is UVC compatible, otherwise you would have to locate and install the drivers. Example of Linux compatible USB webcams: Logitech C615, Logitech HD Pro Webcam C920.

    v4l2-ctl --list-devices

    HD Webcam C615 (usb-....):

    /dev/video2

    /dev/video3

    /dev/media1

    Linux Virtual Camera

    To create a virtual camera

    / v4l2loopback is the virtual camera on linux /

    sudo apt-get install v4l2loopback-dkms v4l2loopback-utils;

    / make sure the user is part of the video group /

    sudo usermod -aG video $USER;sudo reboot;

    / create /dev/video0 /

    sudo rmmod v4l2loopback;sudo modprobe v4l2loopback video_nr=0 exclusive_caps=1 card_label="vcam0";

    sudo v4l2-ctl --list-devices;

    sudo v4l2-ctl -d0 --list-formats-ext

    / create two devices /

    sudo rmmod v4l2loopback;sudo modprobe v4l2loopback devices=2 video_nr=0,1 exclusive_caps=1,1 card_label="vcam0,vcam1";

    / This is needed or python will not work /

    gst-launch-1.0 -v videotestsrc ! clockoverlay ! v4l2sink device=/dev/video0

    / Do not use cheese or ffplay. They cause trouble. Use gst instead /

    export DISPLAY=:0;gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! xvimagesink

    / For screen calibration /

    mimeopen -d ~/stillimage.png

    You can then pipe various sources to virtual camera device using any of toolchains described above. We recommend using gst:

    gst-launch-1.0 ximagesrc startx=0 starty=0 endx=640 endy=480 use-damage=0 ! videoconvert ! v4l2sink device=/dev/video0

    If you like to use ffmpeg see ffmpeg-send-video-to-dev-video0-on-ubuntu:

    ffmpeg -f x11grab -framerate 15 -video_size 640x480 -i :0.0 -f v4l2 /dev/video0

    If you are interested in making the OpenCV Python output available as a virtual camera source to the system, refer to virtual-camera-for-opencv-using-v4l2loopback.

    GStreamer Cookbook

    To see whether an element or plug-in is installed, use:

    gst-inspect-1.0 faceblur

    To see which installed GStreamer packages the plug-ins come with:

    apt list --installed | grep gstreamer
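    To trace an element back to the Debian package that provides it, one option (assuming a dpkg-based system) is to take the plugin filename reported by gst-inspect-1.0 and look it up with dpkg:

    dpkg -S "$(gst-inspect-1.0 faceblur | awk '/Filename/ {print $2}')"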

    FFMPEG/PLAY Cookbook

    If you need to manipulate audio and video files to match the input format, use ffmpeg:

    / generate test videos /

    ffmpeg -f lavfi -i testsrc=duration=10:size=1280x720:rate=30 testsrc.mpg

    ffplay testsrc.mpg

    ~/ffmpeg -i recording.mov -vcodec h264 -acodec mp2 recording.mp4

    / extract audio /

    ~/ffmpeg -i ~/izyware/izy-idman-tools/id/$IZYANIMATEUSERID/virtual/izyanimate/$IZYANIMATEWORKSPACE/datasamples/1/video.mp4 -q:a 0 -map a ~/izyware/izy-idman-tools/id/$IZYANIMATEUSERID/virtual/izyanimate/$IZYANIMATEWORKSPACE/datasamples/1/audio.wav

    / list all devices on mac /

    ~/ffmpeg -f avfoundation -list_devices true -i ""

    / save the output of device 1 to an audio file /

    ~/ffmpeg -y -f avfoundation -i ":1" newfilename.ogg

    / record audio and video /

    ~/ffmpeg -f avfoundation -video_size 640x480 -framerate 30 -i "0:0" -vcodec libx264 -preset veryfast ~/tmp/raw.mp4

    /*

    crop

    trim, cut the middle of the video

    PTS = Presentation TimeStamp, expressed in seconds (supports partial seconds)

    */

    export FFMPEGINPUTVIDEO=~/izyware/izy-idman-tools/id/$IZYANIMATEUSERID/virtual/izyanimate/$IZYANIMATE_WORKSPACE/datasamples/1/video.mp4

    ~/ffmpeg -i $FFMPEGINPUTVIDEO -filter_complex "

    [0:v]split=2[copy1][copy2],

    [copy1]trim=00:05,setpts=PTS-STARTPTS[part1],

    [copy2]trim=112:120,setpts=PTS-STARTPTS[part2],

    [part1][part2]concat=n=2[out]" -map "[out]" ~/tmp/raw.mp4

    / extract audio /

    ~/ffmpeg -i ~/izyware/izy-idman-tools/id/$IZYANIMATEUSERID/virtual/izyanimate/$IZYANIMATEWORKSPACE/datasamples/1/video.mp4 -q:a 0 -map a ~/izyware/izy-idman-tools/id/$IZYANIMATEUSERID/virtual/izyanimate/$IZYANIMATEWORKSPACE/datasamples/1/audio.wav

    / audio conversion /

    ~/ffmpeg -i data/stereo.wav -ac 1 data/mono.wav

    ~/ffmpeg -i data/test-44.1-16-mono.wav -ar 48000 data/test-48.0-16-mono.wav

    ~/ffmpeg -i data/test-48.0-16-mono.wav -c:a pcm_s32le data/test-48.0-32-mono.wav

    ~/ffmpeg -i audio/test-48.0-16.wav -acodec libvorbis audio/test-48.0-16.ogg

    ~/ffmpeg -i file.jpeg source.png

    Working with Animate Workspace Tools

    To record and generate content, use:

    ./ppm.sh "record?recordVideo" --queryObject.izyAnimateWorkspacePath=~/izyware/izy-idman-tools/id/$IZYANIMATEUSERID/virtual/izyanimate/$IZYANIMATEWORKSPACE--queryObject.inputVideoCapturePath=$IZYANIMATESOURCEFRONTDEVICEID--queryObject.recordVideo=True

    ./ppm.sh "analyze?analyzeVideo" --queryObject.imageRepoPath=$izyAnimateWorkspacePath/frames--queryObject.inputVideoCapturePath=~/izyware/izy-idman-tools/id/$IZYANIMATEUSERID/virtual/izyanimate/$IZYANIMATE_WORKSPACE/datasamples/izyanimate-1641837127-w880xh580.mp4

    ./ppm.sh "animate?animate" --queryObject.imageRepoPath=$IZYANIMATEROOT/frames/$IZYANIMATEWORKSPACE--queryObject.inputVideoCapturePath=$IZYANIMATEROOT/datasamples/$IZYANIMATEWORKSPACE.mp4--queryObject.maxFrames=10000--queryObject.frameSize=400--queryObject.waitForKeyboardOnEachFrame=True

    ./ppm.sh "animate?animate" --queryObject.imageRepoPath=$IZYANIMATEROOT/frames/$IZYANIMATEWORKSPACE--queryObject.maxFrames=10000--queryObject.inputVideoCapturePath=2--queryObject.frameSize=400--queryObject.cropInput=True--queryObject.resultsPath=$IZYANIMATEROOT/results/$IZYANIMATEWORKSPACE

    To list audio devices

    npm run getaudiodevices

    For dependencies:

    • For macOS users, naudiodon has a dependency on node-gyp which will require the command line tools to be installed, so make sure to do:

    xcode-select --install

    Platform Considerations

    • We do not recommend the Apple M1 Family:

    * M1 family does not have display scaling to 1280x720 and will slow down the system.

    * Commandline automation for mixed architecture apps will fail.

    Camera Resolution

    Most laptops ship with a 720p HD camera, so the recorded videos will end up having dimensions of 1280x720. For cameras these numbers represent the quantity of vertical pixels (p), tiny dots of color, in a recorded image. HD refers to cameras with a standardized resolution of 720p or 1080p. The numbers 720 and 1080 refer to the vertical resolution (the number of horizontal lines). Therefore, 720p HD camera resolution provides images that are 1280 x 720 pixels (that adds up to 921,600 pixels, which means a 720p HD camera is not technically a megapixel camera), and 1080p HD cameras provide 1920 x 1080-pixel resolution, or 2.1 megapixels. The HD video format also uses an aspect ratio of 16:9 (rather than 5:4 or 4:3), and the frame rate is standardized at 60, 50, 30, or 25 frames per second.
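    The pixel counts quoted above follow directly from the frame dimensions:

    echo $((1280*720))    # 921600 pixels, i.e. just under one megapixel
    echo $((1920*1080))   # 2073600 pixels, i.e. roughly 2.1 megapixels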

    In contrast, smartphones have cameras with much higher resolution and quality. Screen recordings can be several megapixels (for example 2000x3000).

    As of 2021, the virtual camera provided in broadcasting software offers 1280x720 resolution at 30 FPS.

    Safety and Security Notes

    • process

    * never try customizing or changing things during a live run. the audio feedback saves.

    • location and tracking

    * a Faraday cage is not easy to set up; wlanectomy is the preferred method.

    * vicinity can be tracked using location services, wifi, bluetooth, etc., so even devices without a GPS can be geo-tracked.

    * Location Services

    * Manual turn off

    * WLAN

    * wlanectomy

    * auto turn off

    * scripting: networksetup -setnetworkserviceenabled Wi-Fi off

    * OS startup setting: None

    * Bluetooth

    * btectomy

    * auto turn off

    * scripting

    * OS startup settings

    * Settings > Bluetooth > Advanced: Turn off "Open when .. detected"

    * brew install blueutil; blueutil -p 0

    External Resources

    • github
    • npmjs

    NOTE

    for more details, visit https://izyware.com

    Known Issues

    • SKP

    * does not respond to incoming requests when in the background

    • CLI

    * it should always be running and it should automatically recycle after hours.

    * should support single audio interface stream data over high speed LAN for higher quality

    * simple tcp socket streaming to an input using the node audio lib

    * have a single interface in dashboard that shows all users and screens in one spot
    • Video
    * Image processing updates needed for using openCV python library

    * Face landmarks detection - Opencv with Python

    * Build Real Time Face Detection With JavaScript

    * Deepface Lab: from iperov

    * DeepFaceLive: Windows image

    * Paper - Face2Face - Real time Face Capture and Reenactment of RGB Videos

    * Face2Face - Real time Face Capture and Reenactment of RGB Videos

    * periodic check for making sure tracks are in sync (they tend to get out of sync)

    * debounce and soften transitions, otherwise jump back/forth

    * ability to have more than 2 tracks for environment switch (happy, leave, etc.)

    * switches across collection of pairs of tracks.
    • information bucket
    * add delete option for rawdata tokens (for removing accidental bad tasks)

    * add "pings" to information bucket under a bucket and rename it to "acvitibyt"

    * pings, capture, etc. should all be consumable from an analytics dashboard

    * can track across different devices (device id is 1 parameter)

    * device switch should be renamed to device actions (we had raw data before and customer feedback was negative)

    • camera broadcast tool

    * show username in foreground in white or show logo (so that it looks like Zoom)

    * use a background picture for this: gsettings get org.gnome.desktop.background picture-uri

    * npm this: pkill -f firefox;declare -x DISPLAY=":0";firefox "http://....."

    • audio
    * ws/input has the sourceAudioOption hardcoded, but that gets passed in at setupAudioInput. it should use the passed-in argument.

    ChangeLog

    V7.3

    • continues at av-stream README
    • 7300001: add data-izy-circus-inner-html-status tag

    V7.2

    • 7200003: tweak dashboard to work in terminal environment

    * allow injection of dashboardcomposeConfig, dashboardkvsSchema variables from the environment.

    • 7200002: replace admincli with dashboard using izy-circus framework

    npm run dashboard

    • 7200001: update portforwarding to service framework

    V7.1

    • 7100009: kinesis - resize screen
    • 7100008: kinesis - fix bug where display size is hard coded for mouse translation.
    • 7100007: webAudio - display proper error when navigator.mediaDevices is not defined.

    * UIWebKit and like on iOS have reported this problem.

    * workaround is to use: https://github.com/cordova-rtc/cordova-plugin-iosrtc
    • 7100004: audiooutput - prevent unparsable json metadata from crashing the service
    • 7100005: mixeradmin - improve monitoring and localclose handling
    • 7100004: migrate apps/xcast.js to new service based framework and improve monitoring for jsonio
    • 7100003: socket server - improve error handling

    * subscribe to on error event and prevent the service from crashing when address is in use

    • 7100002: add quickRefresh method for remote desktop
    • 7100001: improve remote desktop and refresh algorithm for displaycrops

    V7.0

    • 7000037: implement delete for izyidman/5/pkgdashboard/tab/rawdata
    * without this, the app could not be shared with non-shadow users, which would cut off access to value; even if the app was shared, we would have to hide non-manual payload grabs and device logs for non-shadow users, which would be difficult.
    • 7000036: virtualcamera: reposition cropped area
    • 7000035: selenium: bug hanging forever - connect to the UI and manually
    • 7000034: remotedesktop: stop refreshing when hidden
    • 7000033: service/kinesis implement yOffset

    * useful for adjusting the screen when there are screen misalignments

    • 7000032: service/kinesis/macos/nodec - update to only wobble the mouse pointer

    * previous iterations would click and change the state

    • 7000031: improve silence detection algorithm by extending the time window for looking at silent frames
    • 7000030: break up stopStreamWhenPowerIsBelow to isSilentWhenPowerIsBelow and stopStreamingWhenSilent
    * Move the features from audioInput to socketWriterNode since the script processor does not allow sending a buffer of length 1, so it can't be done in onaudioprocess

    * The other advantage is that it can be applied to other sources besides audioInput (streamWaveLoop, etc.)
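
    A sketch of the windowed silence detection described in 7000031/7000030; isSilentWhenPowerIsBelow is the configuration value named above, everything else (window length, helper name) is illustrative:

    // sketch: report silence only after the RMS power stays below the threshold
    // for a full window of frames
    function createSilenceGate({ isSilentWhenPowerIsBelow, windowFrames }) {
      const history = [];
      return function onFrame(float32Samples) {
        let sum = 0;
        for (let i = 0; i < float32Samples.length; i++) sum += float32Samples[i] * float32Samples[i];
        const power = Math.sqrt(sum / float32Samples.length);
        history.push(power < isSilentWhenPowerIsBelow);
        if (history.length > windowFrames) history.shift();
        return history.length === windowFrames && history.every(Boolean);
      };
    }
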

    • 7000029: xcastmonitory - cleanup and separate the focus function from refresh
    • 7000028: restore enforceSampleRate functionality
    • 7000027: mixeradmin - mediaStream tracks need to be stopped to clear the recording icon in the tab
    • 7000026: mixeradmin - destroyAllAudio when network disconnects or errors
    * otherwise, mic audio will keep streaming into a closed socket and will generate errors
    • 7000025: mixeradmin - reenable remote desktop and allow toggle
    • 7000024: mixeradmin - add network status to qos panel, move metrics calc into the service.
    • 7000023: mixeradmin - implement reset functionality
    * introduce new state 'localclose' for modelling user initiated closure

    * add tracking and cleanup functionality to webAudio

    • 7000022: net/websocket - generate error event when the browser does not call onclose on a failed connection

    * Some browsers (e.g. iOS 16 Safari) won't call onclose when the following happens: [blocked] The page at https://xxx was not allowed to run insecure content from ws://xxx
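
    A sketch of one way to synthesize the error event from 7000022, assuming a timeout-based approach; the timeout value and the onEvent shape are illustrative, not the library's actual API:

    // sketch: if the browser never fires onopen/onclose (e.g. blocked insecure
    // ws:// content), surface an error event after a timeout
    function connectWithTimeout(url, onEvent, timeoutMS) {
      const ws = new WebSocket(url);
      const timer = setTimeout(() => onEvent({ state: 'error', reason: 'no onopen/onclose within timeout' }), timeoutMS);
      ws.onopen = () => { clearTimeout(timer); onEvent({ state: 'open' }); };
      ws.onclose = () => { clearTimeout(timer); onEvent({ state: 'close' }); };
      ws.onerror = () => { clearTimeout(timer); onEvent({ state: 'error' }); };
      return ws;
    }
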

    • 7000021: net/websocket - send onConnectionState with state=error on bad address
    * allows better UX and error handling
    • 7000020: mixeradmin - allow updating config
    • 7000019: improve ui code quality and user experience

    * use //service/serviceName?method to query values on events. do not pass state around in the events

    * use cancelAnimationFrame for making audiovisualizer compatible with IDE
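
    A sketch of the cancelAnimationFrame pattern mentioned above; drawFrame is a hypothetical rendering callback:

    // sketch: keep the requestAnimationFrame handle so the visualizer can be
    // stopped cleanly when the view is torn down (e.g. inside the IDE)
    function startVisualizer(drawFrame) {
      let handle = requestAnimationFrame(function loop() {
        drawFrame();
        handle = requestAnimationFrame(loop);
      });
      return function stopVisualizer() { cancelAnimationFrame(handle); };
    }
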
    • 7000018: implement resume button for unblocking paused audio contexts
    • 7000017: improve connection and audio stream monitoring and networking

    * capture issues related to streams in the audiovisualizer view

    * implement connection visualizer widget
    • 7000016: use service notification framework to couple the visualizer to resources. implement autoPlayPolicyWarning. implement beepAndPauseOnAudioInit for speakerContext
    • 7000015: webAudio - return the first device found regardless of name. Safari/iOS does not include Default in name
    • 7000014: service/mixeradmin/peer - fix microphone code
    * move audioinputvisualizer to top level and decouple from mic selection

    * use notifySubscribers and service subscription model

    * add code for detecting a suspended audioContext

    * improve Webaudio to allow parity between browser native code and non-browser environments

    • 7000013: service/mixeradmin/peer - fix metadatastr collection for speaker

    * temporarily disable currentMicSilenceDetected

    • 7000012: audiovisualizer - add startDelay

    * allows audioContext to get created and be ready

    • 7000011: service/mixeradmin/peer - improve speaker code. move speaker visualizer out of selection widget.
    • 7000010: service/mixeradmin/peer - improve cleanup cycle.
    • 7000009: Webaudio - implement useNativeAudioContext

    * allows mix mode deployments between browser and nonBrowser environments

    • 7000008: switch the services to service?onConfig pattern
    • 7000007: utilize service?compose+config pattern for testing
    • 7000006: extract xcastSourceMonitor/peer from xcastSourceMonitor/api
    • 7000005: remove references to shim-izy-proxy
    • 7000004: move service/mixeradmin/peer to dashboard code. more infrastructure updates.

    * temporarily disable stopStreamWhenPowerIsBelow, clearExistingObjects functionality

    * update webAudio with better logging

    * added documentation for externalScriptProcessorObject

    • 7000003: cleanup and use izy-proxy monitoring infrastructure. update module lifecycles by adding forcemodulereload when needed.
    • 7000002: move lib/monitoring to izy-proxy. use izy-proxy monitoring infrastructure. update service configuration definition for vatar.
    * drop the @ sign and context variable

    * use pkgModuleString and full config names

    * allows for more readability
    • 7000002: freeze V5. lockdown izy-proxy and frontend changes.
    • 7000001: cli - use newChainAsync feature and move setup out of the main loop

    V6.9

    • 6900007: Give refresh option when disconnected
    • 6900006: dashboard, increase delayThresholdMS
    • 6900005: dashboard, capture user inputs in uri

    * on mobile we often disconnect. this feature will make refreshing easier

    • 6900004: bug fix: switch to whereClauseForAllVisibleCollaboratorsBasedOnGroups
    • 6900003: adjust scaling for iOS devices
    • 6900002: use stopStreamWhenPowerIsBelow on the mic. Do not send samples when quiet.
    • 6900001: improve streamproto1 event handlers. add silenceDetected to the user mic. implement adminMicStatus string
    * consistent ui for mic and speaker

    * string usage will make implementing the cli easier

    V6.8

    • 6800019: migrate userinput/audioinput to audiostatus panel

    * decouples cloud settings from audio settings

    • 6800018: migrate userinput/audiooutput to audiostatus panel

    * decouples cloud settings from audio settings

    • 6800017: send only keepAlive packets when detecting silence in the input audio

    * stopStreamWhenPowerIsBelow property is implemented for nonBrowserAudioInputNode and is configurable based on the volume

    * onAudioPacket is implemented for socketReader to indicate silence in the monitoring panel

    * customer issues addressed:

    * customers have reported that on slow internet connections the speaker gets too much delay
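
    A sketch of the keepAlive behavior in 6800017; the one-byte marker and the forwardFrame helper are illustrative, the real logic lives in nonBrowserAudioInputNode and socketReader:

    // sketch: when the input is silent, write a tiny keepAlive packet instead of
    // audio so the link (and the QOS feedback) stays up without wasting bandwidth
    function forwardFrame(socket, float32Samples, isSilent) {
      if (isSilent) {
        socket.write(Buffer.from([0x00])); // placeholder keepAlive marker
        return;
      }
      socket.write(Buffer.from(float32Samples.buffer, float32Samples.byteOffset, float32Samples.byteLength));
    }
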
    • 6800016: selenium - run the service on userspace port after izy-proxy upgrade
    • 6800015: audio - improve monitoring and thresholds
    • 6800014: qos - add audio visualizer and glancable status for audio streams
    * minimal design will improve usability
    • 6800013: remotedesktop - single select user when mouse click on user desktop
    * improve the focus workflow
    • 6800012: remotedesktop - organize keyboard and refresh into inputOptions. add minimal design
    * minimal design will help monitoring on mobile
    • 6800011: remotedesktop - improve refresh rate
    • 6800010: remotedesktop - adjust layouts. add new shortcuts
    • 6800009: remotedesktop - allow single or multiuser selection frontend
    • 6800008: remotedesktop - implement multiuser frontend api
    • 6800007: add autoRefresh option
    * allows glancable monitoring
    • 6800006: implement ability to hide selection portions of ui
    • 6800005: kinesis - implement waitForUIUpdateMS

    * improve ui responsiveness for inputless monitoring

    • 6800004: kinesis - optimize slack view.
    • 6800004: selenium - improve title logging. fix select all selector ember65
    • 6800003: add total uptime metric
    • 6800002: add labeling and colors for mic metrics
    • 6800001: add labeling and colors for speaker metrics

    V6.7

    • 6700011: add enforceSampleRate to audio input

    * security feature for when the device is non standard

    • 6700010: implement adjustSampleBufferSizeToNewFrequency with dynamicResampleRate

    * useful for when the data source has hardware bugs. For example, GHW USB AUDIO would not sample at 48kHz and would return a stream at 48.1kHz.
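
    A sketch of the resampling idea in 6700010 using plain linear interpolation; the real adjustSampleBufferSizeToNewFrequency implementation may differ:

    // sketch: stretch/shrink a Float32 buffer from sourceRate (e.g. 48100)
    // to targetRate (e.g. 48000) with linear interpolation
    function resampleLinear(input, sourceRate, targetRate) {
      const outLength = Math.round(input.length * targetRate / sourceRate);
      const output = new Float32Array(outLength);
      for (let i = 0; i < outLength; i++) {
        const pos = i * (input.length - 1) / ((outLength - 1) || 1);
        const left = Math.floor(pos);
        const right = Math.min(left + 1, input.length - 1);
        output[i] = input[left] + (input[right] - input[left]) * (pos - left);
      }
      return output;
    }
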

    • 6700009: lib/webAudio - bring scriptProcessor on par with W3C Web Audio API
    • 6700008: kinesis - escape input string
    • 6700007: kinesis - dismiss cert warning, fix coordinates
    • 6700006: qos - add netRate for admin speaker
    • 6700005: audioinput - implement crashAfterTimeSeconds

    * this can be utilized as a workaround for the memory leak in workstationaudioout. Customers had reported memory going up to 6GB overnight.

    * over time, this would introduce creeping delays into the speaker, which would eventually grow to multiple seconds.
    • 6700004: audiooutput - clear up legacy code. use IzyScriptNode with getWrapper functionality. add currentSpeakerBufferDepthInFloat32Samples metric.
    * Allows wrapping of WebAudio scriptNode by an IzyScriptNode for connecting to sockets and measuring metrics.
    • 6700003: add test scripts for adminmictoworkstationaudioin.sh and workstationaudioouttoadminspeaker.sh
    • 6700002: capture onConnectionStateOutcome and display to qos. add aggregatorStatus to qos. improve metadata string exchange mechanism for streamproto1. implement sendKeepAlive for socketWriter. add consoleMode to service

    * keepAlive is useful when no source is connected but the QOS metrics still need to be sent. This makes the feedback independent of workstationaudioout and debugging easier.

    * consoleMode allows live inspection of QOS data

    * add useAudioCopy mode for legacy workstationaudioout clients

    • 6700001: dfl - engineConfig.appDisplayCropConfig

    * allows the user to customize which part of the display to project

    V6.6

    • 6600009: audiooutput - implement skipScriptNodeForSocketToSpeakerData

    * allows better performance for non-browser environments

    • 6600008: audioinput - improve diagnostic beep.
    • 6600007: mixeradmin - improve logging for cleanupExistingConnection
    • 6600006: monitoring - add compact day-time timestamps to logs
    • 6600006: virtualcamera - allow using kernel device as the video source vs. rtsp
    * provides better performance and less delay
    • 6600005: implement service/selenium
    • 6600004: audiooutput - unfocus the user when peer disconnects

    * without this feature, the admin will still see that the user is "focused" and will assume that the audio is getting through, which is incorrect. The delay will start increasing, but it would be unclear why.

    * it will introduce a permanent delay to the next connection that is not captured by the admin dashboard delay (possibly due to clogging on the admin client, but the reason is not entirely clear yet). This is really bad because it will ruin the confidence that the user might have in the admin cli dashboard.
    • 6600003: audiooutput - implement beepAndPauseOnAudioInit
    * useful for troubleshooting hardware issues and connection

    * provides smoke test for audio filters

    • 6600002: dfl - add modprobe command to startup sequence

    * ensures that service can be started from a reboot with no issues

    • 6600001: portAudio - provide workaround for the blocking input capture when the device is the system default on MacOs

    V6.5

    • 6500020: virtualcamera - show user warning message
    • 6500019: dfl - map dflConfig.videoInput to /dev/video0
    • 6500018: useAudioCopy - fix the bug where deviceId of zero is not recognized
    • 6500017: cli legacy - fix bug in remoteaccess/synccode.sh where it fails when plat folder is missing
    • 6500016: cli legacy - allow remoteaccess/runapp.sh to specify config

    remoteaccess/runapp.sh service dev|...
    • 6500015: webAudio - use deviceNameGrepStr for selecting audioinput
    • 6500014: virtualcamera - implement video source from rtsp for dfl
    • 6500013: virtualcamera - implement dfl engine
    • 6500012: virtualcamera - allow engineConfig and gstPipelineConfig
    • 6500011: virtualcamera.directvideo - scale video to full screen size, implement feedcamerartsp

    service/virtualcamera/feedcamerartsp.sh ubuntu-xxx

    • 6500010: virtualcamera - implement directvideo engine
    • 6500009: improve admin cli and add hot keys for fast user switching
    • 6500008: implement admincli for alternative to the browser interface

    npm run admincli queryObject.queryObjectId ../queryObject.xcast.xcast.json queryObject.focus

    • 6500007: implement QOS visualizer component
    * usable from the cli and gui interface
    • 6500006: implement destroyAudioDevice and clean up devices when socket disconnects.
    • 6500005: xcast - implement cleanupExistingConnection to customize admin reconnects
    • 6500004: networking - allow turning off reconnect for sockets. fix nodejs websocket client code.
    • 6500003: Implement QOSOutputMetrics for node outputs to improve performance monitoring. Fix the input double start bug.
    • 6500002: improve convertArrayBufferToFloat32 and add support for 32BitPCM
    • 6500001: implement outputDirectlyViaFFPlay

    * useful for bypassing naudio interface and debugging networking performance issues

    V6.4

    • 6400016: admin dashboard - improve QOS visualization
    • 6400015: disconnect and restart xcast when multiple admins try to connect
    * temporary solution to minimize multi-admin audio issues
    • 6400014: fix timestamp encryption algorithm by using Floaty64 datatype
    • 6400013: implement streamproto1 for audioinput.

    * implement filters for audioStreamBetweenNodesLogAllStreamData and audioStreamBetweenNodesGrepStr

    * add buffering to make sure all chunk sizes and delays are handled properly

    * add magic number to header to detect corruptions

    * QOSMetrics will be based on packets and sequence numbers after this change

    * implement simulateChoppyChunks to test audio transmission over low quality networks
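
    A sketch of a header layout in the spirit of 6400013 (magic number plus sequence number); the actual streamproto1 field layout is not documented here, so the offsets and the magic value are illustrative:

    // sketch: prepend a 4-byte magic number and a 4-byte sequence number to each
    // audio chunk so the reader can detect corruption and dropped packets
    const MAGIC = 0x495a5931; // illustrative value
    function encodePacket(seq, payload) {
      const header = Buffer.alloc(8);
      header.writeUInt32BE(MAGIC, 0);
      header.writeUInt32BE(seq >>> 0, 4);
      return Buffer.concat([header, payload]);
    }
    function decodePacket(packet) {
      if (packet.readUInt32BE(0) !== MAGIC) throw new Error('corrupt packet: bad magic number');
      return { seq: packet.readUInt32BE(4), payload: packet.subarray(8) };
    }
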

    • 6400012: admin console - use socketWriterNode to stream microphone audio

    * allows for overlaying protocol metadata and performance metrics inside the audio stream

    • 6400011: implement streamproto1 for sending metadata over audio stream

    clear && printf '\e[3J';npm run testservice queryObject.service testmixeradminspeakerxcast@xcast,testmixeradminspeakerenduser@peer;

    • 6400010: lib/monitoring - improve format and add support for debug info by extraInfoInLogs
    * turning on extraInfoInLogs will generate call stack information

    * format the logs such that they are consumable from the terminal

    • 6400009: consolidate mixeradmin/peer and admin dashboard. implement nonBrowserDebugNode, dataStreamMode, streamProtocol and QOS metrics.

    * dataStreamMode (streammodecooked, streammoderaw): raw will be useful for high-performance relaying of streams without inspecting the contents

    * streamProtocol (streamproto0, streamproto1): allow protocol implementations and versioning over cooked streams

    * enableQOSMetrics: collection and sending of QOS metrics

    • 6400008: improve net/websocket lifecycle management, events and closing mechanism
    • 6400007: admin dashboard - add shim-izy-proxy for newChain and use net/websocket and lib/globals
    * more parity with xcast components

    * enables easier testing and implementation of cli interface

    • 6400006: admin dashboard - add audio visualization for output. add config for streamProtocol definition
    • 6400005: move audio/lib to lib/audiosignal and use in admin dashboard. utilize audioDeviceObject in admin dashboard state management
    • 6400005: admin dashboard - add audio visualization

    * helps quickly determine whether the audio is connected and working

    • 6400004: admin dashboard - improve keyboard input experience

    * added icons and removed extra text

    * group and hide controls and organize in layers
    • 6400004: admin dashboard - move audio controls to the settings area
    • 6400003: admin dashboard - move logview to the settings area
    • 6400002: lib/monitoring - allow registering monitoringIngestionService
    * useful in connecting the logger to ui front-ends or cloudwatch or any other log monitoring provider
    • 6400001: admin dashboard - create settings modal and move source selection to settings
    * reduce clutter on the main dashboard

    V6.3

    • 6300038: implement resetworkstation feature
    • 6300037: implement runCmdBeforeStart to allow custom shell commands

    * this allows controlling the device volume on service startup

    • 6300036: rename virtualspeaker to audioout and create workstationaudioin
    • 6300035: admin dashboard - implement ctrlc
    • 6300034: implement deviceNameGrepStr for IzySpeakerNode
    • 6300033: useAudioCopy for audioinput as an interim solution
    • 6300032: implement AudioNode disconnect. implement dataStreamType for socket and onRawSampleCB data connectivity option.

    * support Float32 and 16BitPCM datastreams

    • 6300031: add support for audio copy tool to act as an audioinput service

    node tools/acp.js 1 :10003 0 1

    node tools/acp.js :10003 6 0 1
    • 6300030: implement IzyAudioInputNode
    • 6300029: allow prefixing the app config from cli

    node apps/workstationaudioout.js dev

    • 6300028: fix dead socket reconnect bug. include workstationaudioout in standard test

    clear && printf '\e[3J';npm run testservice queryObject.service xcast@xcast,portforwarding@xcast,mixeradmin@peer,virtualspeaker@peer,jsonio@peer,virtualcamera@peer,workstationaudioout@peer

    • 6300027: fix bug where reConnect would flood the socket with requests
    • 6300026: implement audio copy - tools/acp for quality assurance on the client machine

    node acp 1 2 0.01

    • 6300025: cleanup mixer control panel and improve usability by adding verbose mode. unified peer and front-end jsonio access by using lib/jsonio
    • 6300024: rename webaudio service to mixeradmin and fix broken dashboard deps
    • 6300023: dashboard - use default for sourceSelector
    • 6300022: migrate izyidman/5/pkgdashboard/tab/xcast2 to izyidman/5/xcast/dashboard
    • 6300021: kinesis frontend - add fullscreen mode
    • 6300020: audio mixer - send empty buffers when channel is muted

    * this will address the blips and audio glitches

    • 6300019: implement userCmdToSendBeforeEachScreenCapture to customize the behavior
    • 6300018: remove experimental features for padding the audio with silence
    * adding sine wave via amix filter: -f lavfi -i "sine=frequency=480:duration=50000000" -i inputFifoPipeName -filter_complex amix=inputs=2:duration=first,...

    * the output stops when one of the inputs stops

    * adding sine wave via amerge: amerge=inputs=2

    * the output stops. it will also resample to the sine frequency and cause distortion.

    • 6300017: move statusmonitor to xcast service
    • 6300016: implement apps/workstationaudioout
    • 6300015: implement audioinput service and mixWorkstationAudio mode for mixer admin audio aggregator

    clear && printf '\e[3J';npm run testservice queryObject.service ajmixermixer@xcast,ajmixeradmin@peer,workstationaudioout@peer

    • 6300014: add support for tcp sockets to nonBrowserSocketWriterNode class
    • 6300013: decouple service type from name and implement type property for service definitions to allow defining multiple services of the same type

    * allows defining context specific naming for services

    • 6300012: kinesis service - fix bug for findVNCSession
    • 6300011: xcast kinesis ui - ability to switch between users
    • 6300010: xcast ui - pass userId as parameter
    • 6300009: fix bug for when ffmpegAudioInputDeviceId is zero
    • 6300008: place the fifo pipes in the tmp directory
    • 6300007: implement useOscillatorToCoverGaps flag for virtual speaker
    • 6300006: adding -re to ffmpeg input does not make a difference
    • 6300005: fix the clicks, pops and gaps noise for audio generated using aresample(libswresample) using async parameter
    * most likely due to gaps introduced during input streaming, since the timestamps for the frames get distorted

    * utilize async and use_wallclock_as_timestamps

    * async relies on the audio timestamps. For audio that comes from a video stream the timestamps are present, but for a raw feed with no timestamps, ffmpeg assigns sequential timestamps irrespective of when the audio samples actually arrive. Adding use_wallclock_as_timestamps forces using the system clock (see the sketch below).

    * see also this post for more information https://stackoverflow.com/questions/20439640/ffmpeg-audio-transcoding-using-libav-libraries
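
    A sketch of how those two ffmpeg options can be combined when spawning the process from node; the input path, rates and output handling are placeholders:

    // sketch: read a raw (timestamp-less) feed, stamp it with the wall clock, and
    // let aresample's async option fill/trim samples to keep the audio continuous
    const { spawn } = require('child_process');
    const ffmpeg = spawn('ffmpeg', [
      '-use_wallclock_as_timestamps', '1',
      '-f', 's16le', '-ar', '48000', '-ac', '1', '-i', 'inputFifoPipeName',
      '-af', 'aresample=async=1',
      '-f', 's16le', 'pipe:1',
    ]);
    ffmpeg.stdout.on('data', (chunk) => { /* forward to the speaker/socket */ });
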

    • 6300004: migrate virtualcamera to service oriented architecture

    clear && printf '\e[3J';npm run testservice queryObject.service xcast@xcast,portforwarding@xcast,mixeradmin@peer,virtualspeaker@peer,jsonio@peer,virtualcamera@peer

    • 6300003: add downx10 and upx10 to xcast frontend
    • 6300002: reorganize xcast2/userinput/audioinput/api.js for consumption from service oriented components.
    • 6300001: update the test audio for xcast frontend.

    V6.2

    • 6200055: fix issue with xdotool mouse click by adding delay between move and click.
    • 6200054: improve createWaveHeader to accept zero for totalSamples
    • 6200053: implement deviceNameGrepStr for virtualspeaker
    • 6200052: implement logdata feature for portforwarding
    • 6200051: refactor and cleanup jsonio
    • 6200050: handle OPTIONS requests for JSONIO service
    • 6200049: implement legacyAudioStream for allowing interoperability with legacy xcast audiostreams
    • 6200048: improve jsonio@xcast error handling
    • 6200047: improve configurability and service structure for kinesis. add support for tigervnc client, pikvm and realvnc client.

    * pikvm does not support mouse over the HTTP API; it is only supported over the ws streaming protocol.

    • 6200046: fix bug in service jsonio/xcast
    • 6200045: update runapp.sh to launch items from apps folder.

    runapp.sh hostfolder kinesis

    • 6200044: implement shelllisten.sh and shellreverse.sh for quick access and setup.
    * useful for quickly setting up a headless machine.

    * make sure to setup the bash environment on the target environment.

    vim ~/.bashrc

    export PATH=$PATH:/Applications/MySQLWorkbench.app/Contents/MacOS;

    export PS1='\w>';

    • 6200043: introduce connectionManagementService feature for manually handling the connection. implement portforwarding service.

    * testxcast was updated to utilize portforwarding

    clear && printf '\e[3J';npm run testservice queryObject.service xcast@xcast,portforwarding@xcast,mixeradmin@peer,virtualspeaker@peer,jsonio@peer

    • 6200042: implement kinesis service

    clear && printf '\e[3J';npm run testservice queryObject.service kinesis@xcast,kinesis@peer

    • 6200041: fix bug in networking. clean up unused interfaces. new test model using testservice.
    * testxcast will be replaced by

    clear && printf '\e[3J';npm run testservice queryObject.service xcast@xcast,mixeradmin@peer,virtualspeaker@peer,jsonio@peer

    • 6200040: migrate jsonio api to xcast/peer model. improve logging and add jsonio peer module.

    * onNewConnection will return dataBinding for socket which could be set to "manual"

    • 6200039: implement enforceLegacyAudioServiceType for virtual speaker

    • 6200038: implement apps/.dev daemons

    • 6200037: service launch - support servicename@context schema
    • 6200036: audio/file/input - allow specifying custom path for the input file
    • 6200035: implement apps/virtualspeaker.js and apps/xcast.js as nodejs daemon
    • 6200034: reorganize module lifecycle and config for improved testability. add reference config test/queryObject.xcast.json and use augmentQueryObject for config startups

    clear && printf '\e[3J';npm run testxcast

    • 6200033: lib/webAudio - implement new extensions and improve metrics and diagnostics collection
    * nonBrowserSocketWriterNode

    * nonBrowserAnalyzerNode

    * createNodeFromXCastConfig: to allow data driven audio graph creation

    * implement AudioNode.setName function for tracing data

    • 6200032: diagnostics: add probePCM16 debug tool alongside probeFloat32. Improve logging.
    • 6200031: convert audio/stream to generic ffmpeg interface for audio/ffmpeg replacement
    • 6200030: consolidate audioDeviceObject creation to lib/webAudio. create generic audio/stream using ffmpeg
    • 6200029: lib/webAudio - implement nonBrowserSocketReaderNode
    • 6200028: virtualcamera add setupMachineXSession script and move environment configs into queryObject.xcast.json
    • 6200027: lib/shell -- allow sudo mode
    • 6200026: userGUIBackend - send neutral key on each screen scan

    * useful for preventing the screen saver from coming on

    • 6200025: client - add zoomLayout switching
    • 6200024: client - improve enumerations and styles for zoom regions and shortcuts
    • 6200023: client - implement izyremotecmd_ and useful keyboard shortcuts

    // tip: use xev for calibration and to see exactly what needs to happen

    xev -geometry 600x600

    • 6200022: client - implement GUI zooming feature
    * allows monitoring from mobile
    • 6200021: improve mouse and keyboard actions. use relative coordinates for scalability. handle source outages gracefully.
    • 6200020: implement lib/monitoring.chain processor. improve large number formatting using formatFriendly
    • 6200019: refactor client - implement apps/virtualcamera for daemon process management.
    • 6200018: refactor usersGUIBackend - add X system startup scripts and make config driven. create apps/users-gui-backend for daemon process management.
    • 6200017: lib/shell - implement kill method
    * kill processes using grepStr
    • 6200016: xcastfe - select system default for input on startup
    • 6200015: xcastfe - select system default for output on startup
    • 6200014: xcast - turn off allowNonSystemDefaultSelection
    * device selection does not work on all devices.
    • 6200014: xcast - implement bi-directional graph tracking. implement close method for cascade releasing resources
    • 6200013: xcast - improve setupAudioGraph, logging and buffering. import tab/xcast2/lib/monitoring
    • 6200012: implement end to end tests for audio streaming (testwebaudio)

    clear && printf '\e[3J';npm run testwebaudio queryObject.verbose.forceUpToLevel 2 queryObject.verbose.transport true queryObject.action streamInputDeviceAsWave|streamWaveLoop

    • 6200011: rewrite client/webaudio/net to use lib/webAudio
    • 6200010: improve the webAudio shim and other libraries

    * implement nonBrowserAudioBuffer (non-interleaved IEEE754 32-bit linear PCM)

    * implement nonBrowserSocketNode, nonBrowserScriptProcessor, createIzySpeakerNode

    * add audio/lib.convertFloat32to16BitPCM to library. implement socket
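
    A sketch of the Float32 to 16BitPCM conversion referenced above (clamp to [-1, 1], then scale to the signed 16-bit range); the library version may handle interleaving and endianness differently:

    // sketch: convert IEEE754 32-bit float samples to signed 16-bit PCM
    function convertFloat32to16BitPCM(float32Samples) {
      const pcm = new Int16Array(float32Samples.length);
      for (let i = 0; i < float32Samples.length; i++) {
        const s = Math.max(-1, Math.min(1, float32Samples[i])); // clamp
        pcm[i] = s < 0 ? s * 0x8000 : s * 0x7fff;               // scale to int16 range
      }
      return pcm;
    }
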

    • 6200009: lib/monitoring - fix bug for counting the data volume
    • 6200008: implement wave stream conversion using ffmpeg

    npm run teststreamconversion queryObject.verbose.forceUpToLevel 2 queryObject.verbose.audioInputSample true queryObject.verbose.streamConversion true queryObject.sampleRate 16000

    • 6200007: implement lib/webAudio universal audioContext and audioNode interfaces.
    • 6200006: implement stream convertor using ffmpeg
    • 6200005: codec.wav - implement createWaveHeader
    • 6200004: import lib/stream. use loopStreamSamplesArray
    • 6200003: create lib/stream.loopStreamSamplesArray. add .log to monitoring
    • 6200002: implement sendMetadataOnAudioSwitch. Improve logging.
    • 6200001: introduce DEV1000320003 connection config for implementation tasks

    V6.1

    • 6100046: switch to using global.applicationState.focusUserId for focus model

    * storing the volume in server (not connection object) so when restarting the client for testing the no reclicking and reconfiguring is required and the user can continue where they left off.

    • 6100045: implement smooth transition for loop changes

    * it will act as a debounce so that for short sentences we don't get a loopy jump.

    • 6100044: virtualcamera: blur frames when not ready. refactor looper and add unit tests.

    $IZYPYTHONROOT/python3 test.py

    • 6100043: test/client: implement mutePattern feature
    * useful for large media when muteRanges can be hard to pinpoint
    • 6100042: video - implement prototype dual bi-directional loop. remove seek command
    • 6100041: fix examineAudioSignal. provide auto blurring features.
    • 6100040: debug/audio - auto detect system speaker
    • 6100039: add data/test-48.0-32-mono.wav and instructions
    • 6100038: improve net/broadcast, use standard monitoring library. support ws://address format. introduce streamToAllUsers flag. fix bugs
    • 6100037: virtualcamera - implement audio stream connector

    * this is needed for feeding the audio from the virtualcamera client and having synced audio/video

    • 6100036: codec/wave: use allowExperimentalWaveParser to allow format code 65534
    • 6100035: improve audioDeviceObject initial sequence and include streamProperties
    * allows for consistent monitoring interface

    * randomize the deviceId to avoid collision

    • 6100034: implement clienttest cli

    * useful for testing audio sync, video quality, etc.

    * useful for testing data extractions, etc.

    * works as a standalone client, with no networking required

    npm run clienttest queryObject.client.type virtualcamera queryObject.user.id $IZYANIMATEUSERID queryObject.user.izyAnimateWorkspacePath ~/izyware/izy-idman-tools/id/$IZYANIMATEUSERID/virtual/izyanimate/$IZYANIMATEWORKSPACE

    • 6100033: create client/virtualcamera

    * improve the separation between the networking and client logic layers

    * improve the client programming model and queryObject schema

    npm run client queryObject.address broadcastip:port queryObject.client.type virtualcamera queryObject.user.id $IZYANIMATEUSERID queryObject.user.izyAnimateWorkspacePath ~/izyware/izy-idman-tools/id/$IZYANIMATEUSERID/virtual/izyanimate/$IZYANIMATEWORKSPACE queryObject.verbose.networking true

    • 6100032: cv/ fix bugs in libraries
    • 6100031: refactor convert16BitPCMToFloat32. Add analyzewave command. Improve debug/audio library.

    npm run analyzewave queryObject.user.id $IZYANIMATEUSERID queryObject.user.izyAnimateWorkspacePath ~/izyware/izy-idman-tools/id/$IZYANIMATEUSERID/virtual/izyanimate/$IZYANIMATEWORKSPACE

    • 6100030: add ability to mute ranges in audio/file.
    • 6100029: refactor audio/file and create libs for extracting audio properties
    • 6100028: ipc - improve error handling for setupWriter
    • 6100027: turn off video when no inputVideoCapturePath is passed
    • 6100026: webaudio - increase thresholdForSendingData
    * on faster backend systems, the default ffmpeg data send can drop to 2k per callback. there is a front-end bug that causes this to happen.
    • 6100025: implement GUI access via remote desktop and vnc

    npm run usersGUIBackend queryObject.address $IZYANIMATESOURCEADDRESS queryObject.id $IZYANIMATEUSERID queryObject.verbose.networking true

    • 6100024: findAudioDeviceIdFromName - fix bug when parsing ffmpeg output
    • 6100023: use shortConnectionsStatus for saving space on mobile devices
    • 6100022: replace deviceId with audioInputAddress.
    • 6100021: add verbose.allErrors flag to always display errors regardless of verbose level
    • 6100020: implement stream monitoring tool. improve logging.
    • 6100019: xcast - fix connectionId memory leak
    * remove all the collateral when the client closes the socket
    • 6100018: implement remoteaccess/download.sh
    • 6100017: implement logview component for collecting metrics. optimize audio processor buffer size to avoid networking delays for audio input streaming
    • 6100016: ffmpeg - calculate actual samplerate for the stream. allow looking up of device ID using device name
    • 6100015: audioinput - break testaudio into smaller chunks, cleanup logs
    • 6100014: xcast - return full status when jsonio focus action
    • 6100013: consolidate select identity into monitoring widget
    • 6100012: update configs to use izyware.com
    • 6100011: wssocket - use p5/ prefix and use url to determine service type
    • 6100010: webaudio - fix scoping issue
    • 6100009: audio input - play test audio in a loop
    • 6100008: minor tweaks to id picker and auto refresh
    • 6100007: implement e2e webaudio and decouple the transport from the ws audio device.
    • 6100006: implement proper error handling for readFile JSONIO
    • 6100005: implement the ability to select the source server.
    • 6100004: implement the ability to route audio input directly to output. embed sample data to allow networkless operation.
    * useful for testing devices without networking connections.
    • 6100003: improve audio output panel ui
    • 6100002: only warn on samplerate mismatch and continue. clear up the local and cloud config.
    • 6100001: improve audio device lifecycle management. implement destroyAudioInput for the ffmpeg device and destroy it when the remote socket closes

    V6.0

    • 6000020: implement wss and webaudio services

    npm run wss queryObject.verbose.networking true queryObject.address ws://localhost:20110 queryObject.verbose.audioInputSample true queryObject.verbose.audioInputLogginIntervalSeconds 2 queryObject.mode streamInputDeviceAsWave|streamWaveLoop

    • 6000019: improve variable names and handling
    • 6000018: lib/ipc - implement rawPipe type

    * allows for connecting to binary streams from other applications using a fifo pipe
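
    A sketch of the rawPipe idea, assuming the fifo already exists (e.g. created with mkfifo) at a placeholder path:

    // sketch: consume a binary stream that another application writes into a fifo pipe
    const fs = require('fs');
    const pipe = fs.createReadStream('/tmp/izy-raw-pipe'); // placeholder path
    pipe.on('data', (chunk) => console.log('received', chunk.length, 'bytes'));
    pipe.on('error', (e) => console.error('pipe error', e.message));
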

    • 6000017: implement front-end piece to webaudio client type
    • 6000016: implement separate cli interface for partial content service

    npm run partialcontent queryObject.address localhost:10003 queryObject.verbose.audioInputSample true queryObject.verbose.partialContent true queryObject.mode streamStaticFile queryObject.fileName test-48.0-16.wav

    npm run partialcontent queryObject.address localhost:10003 queryObject.verbose.audioInputSample true queryObject.verbose.partialContent true queryObject.mode resizeAndStreamWaveFile queryObject.fileName test-48.0-16.wav queryObject.factor 5

    npm run partialcontent queryObject.address localhost:10003 queryObject.verbose.audioInputSample true queryObject.verbose.partialContent true queryObject.mode streamInputDeviceAsWave

    • 6000015: implement front-end ux for partial content streaming
    • 6000014: implement audio output streaming using the ScriptProcessorNode
    • 6000013: cli - add shell script for automating vnc access
    • 6000012: implement partialcontent (406) client type and wave audiocodec

    * would allow streaming audio and video into the native browser without special javascript.

    • 6000011: load sample only if window.audioData is not set.
    • 6000010: refactor existing wav streamer to file/input. support readFile JSONIO command.
    • 6000009: add allowNonSystemDefaultSelection. Implement playing audio file from the source server and test audio feature. add UI for selecting sourceType for audio output. Implement activeAudio.onSocketMessage for streaming back from server. Add optional refresh button.
    • 6000009: fix logging for ws/input. fix bugs.
    • 6000008: implement identity userinput. improve UX for user switching.

    * move the widget to top

    * improve refresh rate so that feedback is immediate.
    • 6000007: implement generic audio streaming interface to xcast and allow echo testing.
    • 6000007: improve socket connection management for xcast2
    • 6000006: implement xcast2 audio capture switching, release resources, fix memory leaks and add logging.
    • 6000006: update xcast2 and improve the UX for audio device selection and source monitoring. add support for websockets
    • 6000004: update remote access tools and add vpn and vnc scripts.
    • 6000003: xcast2 - prototype websocket audio input (ws/input)
    • 6000002: prototype websocket audio input (ws/input)
    * allows connecting an audio device from the browser to the source
    • 6000001: initial izy-xcast fork from izy-idman-tools xcast

    V5.7

    • 5700030: update README
    • 5700029: fix tests
    • 5700028: implement mixer1 receiver type

    * uses audio detection to animate videos

    • 5700027: support backwards compatibility with the v1 client and make recieverType mandatory for receivers. support new client types and the new XCAST2 dashboard. include audioMetadata in the video frames.
    • 5700026: make inputVideoCapturePath optional for video input and make label generation consistent with videoOutput model
    • 5700025: incorporate the receiver scripts into syncdata
    • 5700024: allow postFrameReciept to manipulate the displayConfig
    * useful for quick test development cycle
    • 5700023: displayConfig should default to display size when face property is not specified
    * useful when broadcasting full size videos
    • 5700022: remoteaccess scripts, add config/sourceinformationbucketid
    • 5700021: make videoGenerationMode optional for the receiver and default to the broadcaster
    • 5700020: augment metadata for video capture. improve visualization library
    * include captureTimestamp and frameCount
    • 5700019: use outcomes to handle postFrameRecieptSetup
    • 5700018: add delay to ipc error handler exit to allow child process error output time.
    • 5700017: improve variable and filenames.
    • 5700016: reorganize facepicker for better modularity
    • 5700015: modularize the videogenerationmode and introduce generic applyFeatures and extractFeatures
    • 5700014: implement izyProxy.ldfn using the launchString syntax

    * allows using izyProxy.ldfn('path?fn') in python

    • 5700013: handle IPC post video capture open. return exit code 1 on node fifo error.
    • 5700012: Do face tracking and auto zoom for the first frame only. Loop playback when playing from file
    * We shouldn't block streaming when the face detection fails; otherwise, when the user moves off camera, the receiver will see the user still sitting there paused.
    • 5700011: return proper exit code when ppm.sh fails
    • 5700010: default to direct when videoGenerationMode is not specified
    • 5700009: improve error handling when frames have magic header
    * skip over the frames instead of crashing the app
    • 5700008: improve remote access scripts and allow launch without copy
    • 5700007: implement remote access scripts
    • 5700006: improve video recorder functionality
    • 5700005: implement FOMM videoGeneration
    • 5700004: implement custom videoGenerationMode types. improve labeling. add flag for downsampleInputVideoFrames.
    * allows extensibility for frame generation modules
    • 5700003: add support for NVIDIA GPUs
    • 5700002: automate VNC connections
    • 5700001: improve testvideo messages and text

    V5.6

    • 5600041: add removeBackground flag for displayConfig
    • 5600040: fix workspace loader bug
    • 5600039: improve auto crop and zoom. allow defining videoInputConfig.json file
    • 5600038: add offsetH to displayConfig
    • 5600037: implement directVideo streaming for more flexibility in deployment and scenario tests

    * queryObject.videoMode captures this method

    • 5600036: pyjs ipc - fix readLine bug
    • 5600035: add queryObject.inputVideoCapturePath to testvideo for allowing full e2e testing on the same device
    • 5600035: implement autocrop, calibration, background removal. apply workspace displayConfig during record

    * applying workspace displayConfig during recording will make adjustment easier

    * autocrop will get rid of passing x-y dimensions for cropping, which can change with user position

    * calibration will allow normalization of feature detection and is necessary to make animation work across different shapes and geometries

    * background removal using chroma key compositing (this is simpler than Selfie Segmentation using mediapipe, but requires more work with the screen setup); a rough sketch follows.
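
    A rough sketch of chroma key compositing over an RGBA pixel buffer, to illustrate the background-removal approach; the green-screen assumption and thresholds are illustrative (the actual implementation is in the Python/OpenCV pipeline):

    // sketch: make pixels close to the key color (green screen) transparent
    function chromaKey(rgba, width, height, greenThreshold) {
      for (let i = 0; i < width * height * 4; i += 4) {
        const r = rgba[i], g = rgba[i + 1], b = rgba[i + 2];
        // treat strongly green pixels as background
        if (g > greenThreshold && g > r * 1.4 && g > b * 1.4) rgba[i + 3] = 0;
      }
      return rgba;
    }
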
    • 5600034: automatically adjust brightness. add sleep command

    npm run sleep

    • 5600033: add more tests
    • 5600032: bug fix - update serialization algorithm
    • 5600031: add more logging. improve pyjs interop system
    • 5600030: bug fix - replace hardcoded id
    • 5600029: video - allow display configuration and background image
    • 5600028: python integration - adjust exit codes based on verb outcome

    * failed outcome will return non zero exit code.

    * useful for ipc across apps and catching errors.
    • 5600027: audio - introduce virtual python audio output device. improve testing and audio handling.
    * allows access to scipy and audio processing libraries
    • 5600026: utilize more environment variables and use ~/izyenvironment.sh
    * streamlines the deployment to multiple environments
    • 5600025: improve ipc and shellexec communication and process lifecycle management.
    • 5600024: reciever - honor the queryObject.focus flag on each reconnect.

    * useful for debugging when the server crashes

    • 5600023: fix broken tests and typo
    • 5600022: allow passing file urls as the audio device (deviceId file://sample32khz16.wav)
    * useful for testing and streaming music to consumers
    • 5600021: streamer - fix bug with frame flow control
    • 5600020: video out - kill existing python processes on startup

    * this will remove lingering video windows from a previous crash

    • 5600019: audio out - add queryObject.testOutputOnStartup for recievers
    • 5600018: video out - allow window placement
    • 5600017: implement the ability to runAs the video output, add videotesting

    * useful for broadcasting applications where control and video are separate desktops

    * to test runAs

    npm run videotest queryObject.asUser bg

    • 5600016: python izyProxy - return empty queryObject when no cli parameters are provided
    • 5600015: add integration for cv video with vatar broadcast system
    • 5600014: remove ipc from animate. clean up libraries.
    • 5600013: extend the broadcast frame to include metadata
    • 5600012: report time in local time zone
    • 5600011: add integration tests for broadcast

    npm run testbroadcast

    • 5600010: cleanup setupVatarToCV IPC configuration
    • 5600009: integrate CV launch into source server. add IPC.
    • 5600008: cv - add lib/config
    • 5600007: cv - remove dead code. update gitignore.
    • 5600006: add computer vision recorder, analyzer and animator library.
    * input averaging: add input low pass filter by averaging the frames for flicker reduction

    * debounce the change detection and define a threshold for change.

    * optimize framerate by introducing delay

    * output averaging: remember the previous pick and allow smooth transition on updates
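
    A sketch of the input-averaging and debounce ideas from 5600006 in illustrative form (the actual library is Python/OpenCV based; names and thresholds here are assumptions):

    // sketch: low-pass a per-frame signal by averaging the last N values and only
    // report a change once the averaged value moves past a threshold (debounce)
    function createChangeDetector({ windowSize, changeThreshold }) {
      const window = [];
      let lastReported = null;
      return function onFrameValue(value) {
        window.push(value);
        if (window.length > windowSize) window.shift();
        const avg = window.reduce((a, b) => a + b, 0) / window.length;
        if (lastReported === null || Math.abs(avg - lastReported) > changeThreshold) {
          lastReported = avg;
          return { changed: true, value: avg };
        }
        return { changed: false, value: avg };
      };
    }
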

    • 5600005: add mouse emulator.
    • 5600004: add statusmonitor. implement jsonio xcast server interface
    • 5600003: remove startvirtualcam from OBS cmdline
    • 5600002: implement audio mixer
    • 5600001: implement camera broadcast tool

    * improve frame speed rate

    * black background

    * white background will generate glare in dark room

    * disable password prompt and quietly stop

    * automatic login

    * build process

    rm ../camera.zip; zip -r ../camera.zip . -x '.git/*' -x 'node_modules/*'

    servers/ubuntu_/putfile.sh ../camera.zip .

    servers/ubuntu_/shell.sh

    unzip -o camera.zip -d camera

    V5.4

    • 540006: Add placeholder.png for customization
    • 540005: Put the focus back on the tab (not the newer ones) that has the switcher.
    • 540004: Allow search by memo field
    • 540003: Update tab labels
    • 540002: Disable client sync by default

    * unstable client connections will not disturb the application

    • 540001: Add memo to cliconfig.api

    V5.3

    • 530020: Add description to cliconfig.api. remove set description from actions.
    • 530019: Update virtual shell prompt to use the mandatory fields id and description
    * This will guarantee that the prompt will always be user friendly
    • 530018: Add sessions and more robust connection recovery. Add more tests.
    • 530017: IzyVatar features
    • 530016: remember tabs on context switch
    * improves workflow efficiency
    • 530015: when switching update the snapshot and count updates
    * improves workflow efficiency
    • 530014: add reswitchBrowserContext and store contextSwitch Ids in the browser session
    * this will allow for implementation of quick context switching in the shells
    • 530013: implement add empty context
    * useful for starting a new session with new account
    • 530012: add device tracking for context switching
    * deviceswitch schema is introduced
    • 530011: add contextswitch functionality
    • 530010: add setdescription functionality
    • 530009: expose memo field in the inspect view
    * replaces cptools functionality
    • 530008: add izy-loadobject nodejs-require for locating the templates relative to dirname.
    • 530007: add RAWDATA browsing and import organized by content bucket.

    * allows for aggregating view on devices and other areas.

    • 530006: create apps/devicemanager/5/dashboard and add support for "switching" using raw_tokendata.

    * replace legacy device management raw_tokendata management

    * replaces cptools functionality

    * improves device context capture using 'browsercontext5' and referencing rawtokendata_id

    * legacy cptools was done via the 'cptoolsuserid' cookie in cptools/pkgdashboard/admin.
    • 530005: Improve visualization for collaboration fields.
    * generates the link for the izyshell.sh and replaces the legacy cptools
    • 530004: Add support for AWS shell
    • 530003: Add support for IZYDEFAULTUSER
    • 530002: pkgdashboard add support for aws_credentials. control access based on collaboration features.
    • 530001: clone "viewer/view" from "viewer/top" for compatibility with modern shells
    • include and add reference to Dockerfile in package

    V5.2

    • implement action/cliconfig for izy-idman-tools
    • use IZYAUTHORIZATIONTOKEN environment variable
    • add description field as the first step in adding identity context
    • link ~/izyware/izy-idman-tools/id/<>/Desktop to izyidman records
    • move izyidman/5/pkgdashboard to separate workspace

    V5

    • izyshell.sh: add optional port mapping
    • desktopautomation: launch the fragile apps at the end of sequence
    • use require('os').homedir() for the shell root directory
    • izyaws: use $HOME/izyware/izy-idman-tools/id for root
    • add applescript modules for automation
    • add schema for vmuser
    • add IZYSHELL_COMMONDIR
    • izyaws: pass more parameters
    • add izyaws.sh
    • add NPM_TOKEN to bashrc
    • Add dante source code for reference.
    • Add support for tunneling UDP through socks
    • Add support for openVPN

    rtsp-stream-from-ip-camera-delay-in-input-in-preview: https://obsproject.com/forum/threads/rtsp-stream-from-ip-camera-delay-in-input-in-preview.112518/

    ffmpeg-send-video-to-dev-video0-on-ubuntu: https://askubuntu.com/questions/881305/is-there-any-way-ffmpeg-send-video-to-dev-video0-on-ubuntu

    how-to-install-the-mpeg-4-aac-decoder-and-the-h-264-decoder: https://askubuntu.com/questions/214421/how-to-install-the-mpeg-4-aac-decoder-and-the-h-264-decoder

    virtual-camera-for-opencv-using-v4l2loopback: https://arcoresearchgroup.wordpress.com/2020/06/02/virtual-camera-for-opencv-using-v4l2loopback/

    wowza-streaming-protocols: https://www.wowza.com/blog/streaming-protocols

    nvidia-driver-installation-guide (ubuntu-installation): https://docs.nvidia.com/datacenter/tesla/driver-installation-guide/

    nvidia-container-toolkit (docker): https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html

    cuda-installation-guide-linux (ubuntu-installation): https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html

    nvidia-container-toolkit: https://github.com/NVIDIA/nvidia-container-toolkit

    cuda-compatibility: https://docs.nvidia.com/deploy/cuda-compatibility/

    official tensorflow installation: https://www.tensorflow.org/install/source_windows

    Pytorch not working on Python 3.10: https://discuss.pytorch.org/t/pytorch-pip-installation-not-working/134884

    Deepface Lab: https://www.youtube.com/watch?v=tW7EENTWXRk

    DeepFaceLive: https://github.com/iperov/DeepFaceLive

    Build Real Time Face Detection With JavaScript: https://www.youtube.com/watch?v=CVClHLwv-4I

    Face landmarks detection - Opencv with Python: https://www.youtube.com/watch?v=MrRGVOhARYY

    Paper - Face2Face - Real time Face Capture and Reenactment of RGB Videos: http://www.graphics.stanford.edu/~niessner/papers/2016/1facetoface/thies2016face.pdf

    Face2Face - Real time Face Capture and Reenactment of RGB Videos: https://www.youtube.com/watch?v=s1DPhc9HNQ0

    github: https://github.com/izyware/izy-idman-tools

    npmjs: https://www.npmjs.com/package/izy-idman-tools

    vnc-rfb-client: https://github.com/filipecbmoc/vnc-rfb-client

    valence: https://github.com/simmons/valence

    understanding-trrs-and-audio-jacks: https://www.cablechick.com.au/blog/understanding-trrs-and-audio-jacks/