Developing Qt5 applications natively on Tempemail Linux

By Nathan Hartman

Introduction
Tempemail Linux provides the technologies essential to building a flexible, stable, and secure platform for your embedded devices.
Based on OpenEmbedded releases from the Yocto Project, it is designed to let you customize your platform to include only the packages and features you need. Powered by bitbake, it can build an entire Linux distribution from source by following repeatable recipes. This is powerful, but can be foreign to application developers who already have a workflow they are comfortable with.
Developers building graphical user interfaces (GUI) have their own set of tools that they rely on. Often they prefer to use an Integrated Development Environment (IDE) tailored to the language and frameworks they are working with. Typically this IDE and the tools it uses are running natively on the same platform they are building for.
Fortunately, these developers can still do this on Tempemail Linux. This tutorial describes building Tempemail Linux with the GCC toolchain and Qt Creator included to enable native application development.
Requirements
Building the entire platform has a few simple requirements to get started. Many Linux distributions are self-hosted, meaning you can only build the next version of the distribution with the previous release.
Tempemail Linux supports a wide variety of hosts. The official supported list of hosts is below, but many newer releases have been tested and known to work.
Supported Distribution for Tempemail Linux LTS 19:

CentOS 7.6
Fedora 30
openSUSE Leap 15
Red Hat Enterprise Linux 7.6 and 8.0
SUSE Linux Enterprise Desktop 15
Ubuntu Desktop 16.04 and 18.04 LTS

For details on necessary Linux Host System Libraries and Executables please refer to the documentation.
For example, on Ubuntu systems the following packages must be installed:
$ sudo apt install gawk wget git-core diffstat unzip texinfo gcc-multilib build-essential chrpath socat cpio python python3 python3-pip python3-pexpect xz-utils debianutils iputils-ping libsdl1.2-dev xterm file git bmap-tools coreutils parted e2fsprogs

In addition, the deployment steps require root or sudo access for deploying to an SD Card or USB flash device.
Lastly, this tutorial has been validated for the Raspberry Pi 4 and Intel NUC devices (NUC5i3MYBE, NUC6i7KYK, NUC7i5DNK). These instructions should work on other devices, however we tested these devices specifically to ensure that the hardware acceleration was enabled for top performance.
Cloning the Tempemail Linux repository
The first step is to clone the Tempemail Linux GitHub repository.

Create a directory for the tools needed to create the Tempemail Linux image. This will be referred to as the parent directory from this point onwards:
$ mkdir wrlinux_qt
$ cd wrlinux_qt

In a Linux terminal, clone the repository into your build folder with the following command:
$ git clone https://github.com/WindRiver-Labs/wrlinux-x.git

Note: A clone of wrlinux-x defaults to the WRLINUX_10_19_BASE branch with the latest update tagged. WRLINUX_10_19_BASE_UPDATE0003 or greater is required for the Raspberry Pi 4 BSP. This tutorial was written using WRLINUX_10_19_BASE_UPDATE0007.

Configure the build for your device
This section describes the usage of the Tempemail setup.sh tool for easy configuration of the build. We will use it to specify the target board, download the layers required, and pre-populate the configuration file.

In the parent directory that wrlinux-x was cloned into (wrlinux_qt), run the setup.sh script. Accept the End User License Agreement (EULA).
For Raspberry Pi 4 use:
$ ./wrlinux-x/setup.sh --machine bcm-2xxx-rpi4 --dl-layers

For an Intel NUC use:
$ ./wrlinux-x/setup.sh --machine intel-x86-64 --dl-layers

Note: The --machine flag specifies that the build should include the board support package for your device; the --dl-layers flag downloads the package source now instead of at build time.
After some time, you will see:
Fetching projects: 100% (16/16), done.
Syncing work tree: 100% (16/16), done.

At which point the following files and directories should have been generated:
$ ls -al
total 64
drwxr-xr-x 8 nhartman users 4096 May 25 16:53 .
drwxr-xr-x 3 nhartman users 4096 May 25 15:44 ..
drwxr-xr-x 5 nhartman users 4096 May 25 16:53 bin
lrwxrwxrwx 1 nhartman users 22 May 25 16:53 bitbake -> layers/oe-core/bitbake
drwxr-xr-x 5 nhartman users 4096 May 25 16:42 config
-rw-r--r-- 1 nhartman users 2279 May 25 16:42 default.xml
lrwxrwxrwx 1 nhartman users 89 May 25 16:42 environment-setup-x86_64-wrlinuxsdk-linux -> /home/nhartman/wrlinux_qt/bin/buildtools/environment-setup-x86_64-wrlinuxsdk-linux
drwxr-xr-x 8 nhartman users 4096 May 25 16:53 .git
-rw-r--r-- 1 nhartman users 111 May 25 16:42 .gitconfig
-rw-r--r-- 1 nhartman users 147 May 25 16:42 .gitignore
-rw-r--r-- 1 nhartman users 61 May 25 16:42 .gitmodules
drwxr-xr-x 16 nhartman users 4096 May 25 16:53 layers
lrwxrwxrwx 1 nhartman users 19 May 25 16:53 meta -> layers/oe-core/meta
lrwxrwxrwx 1 nhartman users 32 May 25 16:53 oe-init-build-env -> layers/oe-core/oe-init-build-env
-rw-r--r-- 1 nhartman users 2882 May 25 16:42 README
drwxr-xr-x 7 nhartman users 4096 May 25 16:53 .repo
-rw-r--r-- 1 nhartman users 205 May 25 16:42 .repo_.gitconfig.json
lrwxrwxrwx 1 nhartman users 22 May 25 16:53 scripts -> layers/oe-core/scripts
-rw-r--r-- 1 nhartman users 73 May 25 16:42 .templateconf
drwxr-xr-x 5 nhartman users 4096 May 25 13:53 wrlinux-x

Run the environment setup scripts that were generated in the parent directory. They will create and change to the build sub-directory.
$ . ./environment-setup-x86_64-wrlinuxsdk-linux
$ . ./oe-init-build-env

You had no conf/local.conf file. This configuration file has therefore been
created for you with some default values. You may wish to edit it to, for
example, select a different MACHINE (target hardware). See conf/local.conf
for more information as common configuration options are commented.

You had no conf/bblayers.conf file. This configuration file has therefore been
created for you with some default values. To add additional metadata layers
into your configuration please add entries to conf/bblayers.conf.

The Yocto Project has extensive documentation about OE including a reference
manual which can be found at:
http://yoctoproject.org/documentation

For more information about OpenEmbedded see their website:
http://www.openembedded.org/

This project was configured with the following options:
–machine bcm-2xxx-rpi4 –dl-layers

Common Tempemail images are:
wrlinux-image-small (suggests distro: wrlinux and feature/busybox)
wrlinux-image-core (suggests distro: wrlinux)
wrlinux-image-std (suggests distro: wrlinux)
wrlinux-image-std-sato (requires distro: wrlinux-graphics)

Common Yocto Project images, typically built with distro poky, are:
core-image-minimal
core-image-base
core-image-sato

You can also run generated qemu images with a command like 'runqemu qemux86-64'

These scripts will set environment variables for the build tool as well as generate some pre-built configuration files.
Note: If you have previously built an image, running these scripts will not overwrite your existing configurations. Rename, move or delete previous configuration files to ensure the correct configuration files are generated.
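One way to handle that case is a small cleanup sketch like the following (the `.bak` suffix is an arbitrary choice for illustration): back up the stale files so the setup scripts regenerate fresh ones.

```shell
# Back up any previous build configuration so oe-init-build-env
# generates fresh conf/local.conf and conf/bblayers.conf.
for f in conf/local.conf conf/bblayers.conf; do
    if [ -e "$f" ]; then
        mv "$f" "$f.bak"
    fi
done
```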

Patching the project directories
This section describes how to add the required template files using git.

Clone the meta-qt5 and the meta-qt5-extra repositories in a my-layers directory
$ mkdir my-layers
$ git clone -b zeus https://github.com/meta-qt5/meta-qt5.git my-layers/meta-qt5
$ git clone -b zeus https://github.com/schnitzeltony/meta-qt5-extra.git my-layers/meta-qt5-extra

Download the required patches listed below into the wrlinux_qt/build directory:
Credit goes to my colleague, Quanyang Wang, for creating the patches to integrate LxQt desktop on Tempemail Linux.

In the meta-qt5-extra directory, apply the first patch:
$ cd my-layers/meta-qt5-extra
$ git am 0001-polkit-qt-1-fix-compile-error.patch
$ cd $BUILDDIR

In the wrlinux layer directory, apply the wrlinux patches:
$ cd ../layers/wrlinux
$ git am 0001-wrlinux-template-add-template-qt5-for-wrlinux.patch
$ git am 0002-wrlinux-template-add-lxqt-support-for-wrlinux.patch
$ cd $BUILDDIR

This section describes how to register the meta-qt5 and meta-qt5-extra layers with the build, and how to add the GCC toolchain and desktop environment to the image.

Using the bitbake-layers tool, add the layers to the conf/bblayers.conf file. This allows bitbake to locate the custom layers when building the image. In addition, if using a Raspberry Pi, add the Raspberry Pi graphics layer to enable hardware acceleration.
$ bitbake-layers add-layer my-layers/meta-qt5
$ bitbake-layers add-layer my-layers/meta-qt5-extra

If building for the Raspberry Pi, add in addition for hardware acceleration:
$ bitbake-layers add-layer ../layers/bcm-2xxx-rpi/rpi-graphics/

Edit conf/local.conf configuration file to add the GCC toolchain, packages required for Qt5 and the desktop environment. Append the following lines to the end of conf/local.conf:
BB_NO_NETWORK = "0"
BB_NUMBER_THREADS = "16"
PARALLEL_MAKE = "-j 16"
WRTEMPLATE = "feature/qt5 feature/lxqt"
IMAGE_INSTALL_append = " \
    packagegroup-core-buildessential \
    xserver-xorg \
    xserver-xorg-extension-glx \
    mesa \
    mesa-demos \
    openssh \
    git \
"
DISTRO_FEATURES_append = " x11 opengl polkit"

Note: this tutorial uses the LxQt desktop, but you may replace feature/lxqt with feature/xfce if you prefer to use the desktop featured with the Raspberry Pi Foundation images.
If building for the Raspberry Pi, also add the following to enable hardware acceleration:
LICENSE_FLAGS_WHITELIST = "commercial"

If building for Raspberry Pi, edit ../layers/bcm-2xxx-rpi/recipes-bsp/boot-config/boot-config/cmdline.txt file to adjust the kernel parameters.
$ cat ../layers/bcm-2xxx-rpi/recipes-bsp/boot-config/boot-config/cmdline.txt
dwc_otg.lpm_enable=0 console=serial0,115200 root=/dev/mmcblk0p2 rootfstype=ext4 rootwait ip=dhcp
$ echo 'dwc_otg.lpm_enable=0 console=tty root=/dev/mmcblk0p2 rootfstype=ext4 rootwait' > ../layers/bcm-2xxx-rpi/recipes-bsp/boot-config/boot-config/cmdline.txt

Note: These changes ensure that the console output appears on the HDMI display and that the boot sequence doesn’t wait for a DHCP connection.

Building the image
This section describes building the Tempemail Linux image

Build the image of your choice. As listed in the oe-init-build-env output above, there are several suggested images. We will build wrlinux-image-std-sato, an image optimized for a desktop environment.
$ bitbake wrlinux-image-std-sato
Processing Tempemail template files…
Parsing recipes: 2% |#####

After some time, you will see the following when the build is finished:
Initialising tasks: 100% |######################################################################| Time: 0:00:07
Sstate summary: Wanted 3536 Found 0 Missed 3536 Current 0 (0% match, 0% complete)
NOTE: Executing Tasks
NOTE: Setscene tasks completed

NOTE: Tasks Summary: Attempted 4643 tasks of which 0 didn’t need to be rerun and all succeeded.

Identifying your USB device
This section describes how to identify your USB SD Card adapter or USB flash drive using fdisk.

Use the fdisk command to list the block devices detected by Linux:
$ sudo fdisk -l

Identify your device through the model name or capacity.
Disk /dev/sdx: 7.43 GiB, 7969177600 bytes, 15564800 sectors
Disk model: SD Card Reader
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x70e121a4

Device Boot Start End Sectors Size Id Type
/dev/sdx1 * 8192 532479 524288 256M c W95 FAT32 (LBA)
/dev/sdx2 532480 3133739 2601260 1.2G 83 Linux

Note: In this case the device is '/dev/sdx' as identified by the capacity and 'Disk model'. The device name should take the format of '/dev/sdx' where x is a letter specific to your machine.
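If the fdisk output is ambiguous, one cross-check (a sketch, not part of the original instructions) is to snapshot the kernel's partition list before and after inserting the device; whatever appears in the second snapshot is your target:

```shell
# List block devices before inserting the card reader...
awk 'NR > 2 && NF {print $4}' /proc/partitions | sort > /tmp/before.txt
# ...insert the SD card or USB stick, then list again:
awk 'NR > 2 && NF {print $4}' /proc/partitions | sort > /tmp/after.txt
# Lines only in the second snapshot are the new device and its partitions.
comm -13 /tmp/before.txt /tmp/after.txt
```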

Flashing the image
This section describes how to write the generated .wic and .wic.bmap or .iso files to the SD card or USB flash drive.
For the Raspberry Pi 4:

Locate the images generated by bitbake. Relative to the build directory, the image path is:
tmp-glibc/deploy/images/bcm-2xxx-rpi4

Specifically, we need the wrlinux-image-std-sato-bcm-2xxx-rpi4.wic and wrlinux-image-std-sato-bcm-2xxx-rpi4.wic.bmap files.

Using bmaptool, flash the generated files to the USB device:
$ sudo bmaptool copy --bmap IMAGE_NAME-bcm-2xxx-rpi4.wic.bmap IMAGE_NAME-bcm-2xxx-rpi4.wic /dev/sdx

After up to several minutes (depending on the speed of your USB device) you should see:
$ sudo bmaptool copy --bmap wrlinux-image-std-bcm-2xxx-rpi4.wic.bmap wrlinux-image-std-bcm-2xxx-rpi4.wic /dev/sdx
[sudo] password for nhartman:
bmaptool: info: block map format version 2.0
bmaptool: info: 391718 blocks of size 4096 (1.5 GiB), mapped 247338 blocks (996.2 MiB or 63.1%)
bmaptool: info: copying image 'wrlinux-image-std-bcm-2xxx-rpi4.wic' to block device '/dev/sdx' using bmap file 'wrlinux-image-std-bcm-2xxx-rpi4.wic.bmap'
bmaptool: info: 100% copied
bmaptool: info: synchronizing '/dev/sdx'
bmaptool: info: copying time: 43.7s, copying speed 22.1 MiB/sec

For the Intel NUC:

Locate the images generated by bitbake. Relative to the build directory, the image path is:
tmp-glibc/deploy/images/intel-x86-64

Specifically, we need the wrlinux-image-std-sato-intel-x86-64.iso file.

Flash your image using 'dd':
$ sudo dd if=PATH_TO_IMAGE/wrlinux-image-std-sato-intel-x86-64.iso of=/dev/sdx status=progress && sync

In some cases, dd can appear to hang while dirty pages are flushed from memory to the USB device. Check the progress with:
$ grep Dirty /proc/meminfo

The Dirty value (in kB) should approach the low hundreds when the write is finished.
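As a sketch of how that check could be automated (the 500 kB threshold is an arbitrary illustration, not from the original post):

```shell
# Poll /proc/meminfo until the kernel has flushed most of its
# dirty page cache (values are reported in kB).
while [ "$(awk '/^Dirty:/ {print $2}' /proc/meminfo)" -gt 500 ]; do
    sleep 1
done
echo "write-back finished"
```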

Resizing the root partition and filesystem
This section describes how to resize the root filesystem to take up the full capacity of the SD card. With the USB device inserted, run the following commands, replacing '/dev/sdx' with your device.

Resize the second partition to fill 100% of the storage device.
$ sudo parted /dev/sdx resizepart 2 100%
Information: You may need to update /etc/fstab.

Run the EXT2/3/4 filesystem check tool on the second partition to fix any potential problems.
$ sudo e2fsck -f /dev/sdx2
e2fsck 1.45.3 (14-Jul-2019)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
root: 52648/1796640 files (0.4% non-contiguous), 662799/3723264 blocks

Resize the EXT4 filesystem with resize2fs to expand it and fill the entire partition.
$ sudo resize2fs /dev/sdx2
resize2fs 1.45.3 (14-Jul-2019)
Resizing the filesystem on /dev/sdx2 to 3723264 (4k) blocks.
The filesystem on /dev/sdx2 is now 3723264 (4k) blocks long.

Results
For the LxQt desktop, the login and password are 'wrluser' and the $HOME directory is /home/wrluser.
After logging in you will be presented with the LxQt Desktop:

Note: If you chose to use the XFCE desktop, the login is 'root' with no password and the $HOME directory is /root.
Before running the examples
The sample applications come from Qt git repositories. Cloning the repositories requires that you have a working network connection. If for any reason your device didn’t automatically get a dynamic IP address you may use the following commands to obtain one.

Open QTerminal by clicking on the icon in the bottom left corner, then click System Tools > QTerminal.

In a QTerminal, execute the su command to become the root user.

Then execute the "ip a" command to verify that your device has retrieved an IP address. Refer to the inet line under eth0 to see your IP address.

If you do not already have an IP address, then you may execute dhclient eth0 to request a dynamic IP address.

Execute exit to stop running commands as the root user and become wrluser again.

Sample Application: glxgears
This section demonstrates the mesa-demos glxgears application.

Open QTerminal by clicking on the icon in the bottom left corner, then click System Tools > QTerminal.

Execute glxgears to try the OpenGL example. If the hardware acceleration is working, it should report around 60 frames per second on the Raspberry Pi 4.
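If you want to capture the frame rate without watching the window, a sketch (assuming this build's glxgears prints the usual "<N> frames in 5.0 seconds = <FPS> FPS" lines and that the coreutils timeout command is available):

```shell
# Run glxgears for ~11 seconds (two measurement intervals)
# and extract the reported frame rate from each line.
timeout 11 glxgears 2>/dev/null | awk '/frames in/ {print $(NF-1), "FPS"}'
```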

Sample Application: OpenGLwindow
This section demonstrates an OpenGL example from the Qtbase repository.

Use git to clone the repository containing the OpenGL examples
$ git clone --depth 1 -b dev git://code.qt.io/qt/qtbase.git
Cloning into 'qtbase'...
remote: Counting objects: 23735, done.
remote: Compressing objects: 100% (18386/18386), done.
remote: Total 23735 (delta 5511), reused 16062 (delta 3270)
Receiving objects: 100% (23735/23735), 63.00 MiB | 3.92 MiB/s, done.
Resolving deltas: 100% (5511/5511), done.
Updating files: 100% (23031/23031), done.

Copy the examples folder out of the qtbase directory so that Qt Creator will let us build the project.
$ cp -r qtbase/examples $HOME

Launch Qt Creator from the GUI

Open the openglwindow.pro file by selecting File > Open File or Project from the menu.

Navigate to $HOME/examples/opengl/openglwindow/openglwindow.pro, then click Open.

Select "openglwindow" as the Active Project, then click the Configure Project button.

Build and run the application by clicking Build > Run in the menu.

(Optional) You may click the “Compile Output” tab along the bottom to watch the toolchain output as the project builds.

After a few moments the openglwindow should appear with a spinning rainbow triangle.

Sample Application: QtCluster
This section demonstrates the QtCluster example from the Qtbase docs repository.

Open a terminal and clone the repository containing the Qt docs:
git clone git://code.qt.io/qt/qtdoc.git --branch 5.10
Cloning into 'qtdoc'...
remote: Counting objects: 24976, done.
remote: Compressing objects: 100% (12628/12628), done.
remote: Total 24976 (delta 17217), reused 17635 (delta 11944)
Receiving objects: 100% (24976/24976), 42.44 MiB | 6.23 MiB/s, done.
Resolving deltas: 100% (17217/17217), done.

Launch Qt Creator from the GUI

Open the qtcluster-base.pro file by selecting File > Open File or Project from the menu.

Navigate to $HOME/qtdoc/doc/src/snippets/qtcluster/qtcluster-base.pro, then click Open.

Select "qtcluster-base" as the Active Project, then click the Configure Project button.

Build and run the application by clicking Build > Run in the menu.

Note: If the hardware acceleration is working, it should report around 60 frames per second on the Raspberry Pi 4.

Conclusions
Today’s embedded devices are more powerful than ever before and capable of displaying beautiful graphical user interfaces. This allows GUI developers to work directly on the device in a way that may not have been possible before. Providing the development tools on the embedded device, along with drivers for hardware acceleration, makes it easier to get started and speeds up development.
References
All product names, logos, and brands are property of their respective owners. All company, product, and service names used in this software are for identification purposes only. Tempemail are registered trademarks of Tempemail Systems.
Disclaimer of Warranty / No Support: Tempemail does not provide support and maintenance services for this software, under Tempemail’s standard Software Support and Maintenance Agreement or otherwise. Unless required by applicable law, Tempemail provides the software (and each contributor provides its contribution) on an “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, either express or implied, including, without limitation, any warranties of TITLE, NONINFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the software and assume any risks associated with your exercise of permissions under the license.
Qt is a registered trademark of The Qt Company Ltd in the United States and other countries.
OpenGL is a registered trademark of Silicon Graphics, Inc. in the United States and other countries.


NVIDIA container runtime for Tempemail Linux

By Pablo Rodriguez Quesada

Introduction
Training and using AI models are tasks that demand significant computational power. Current trends point to deep neural networks, which involve thousands, if not millions, of operations per iteration. In the past year, more and more researchers have sounded the alarm on the exploding costs of deep learning: the computing power needed to do AI is now rising seven times faster than ever before [1]. These new needs are driving hardware companies to create accelerators such as neural processing units (NPUs) and GPUs.
Embedded systems are not an exception to this transformation. Every day we see intelligent traffic lights, autonomous vehicles, intelligent IoT devices, and more. The current direction is to put accelerators inside these embedded devices, mainly systems-on-chip. Hardware developers have embedded small accelerators such as GPUs and FPGAs into SoCs, SOMs, and other systems. We call these modern systems heterogeneous computing architectures.
The use of GPUs on Linux is not something new; we have been able to do so for many years. However, it would be great to accelerate the development and deployment of HPC applications. Containers enable portability, stability, and many other characteristics when deploying an application. For this reason, companies are investing so much in these technologies. For instance, NVIDIA recently started a project that enables CUDA on Docker [2].
One concern when dealing with containers is the loss of performance. However, when comparing the performance of the GPU with and without the container environment, researchers found that no additional overhead is caused [3]. This consistent performance is one of the principal benefits of containers over virtual machines; the GPU is accessed seamlessly because the kernel stays constant.
NVIDIA-Docker on Yocto
Together with Matt Madison (Maintainer of meta-tegra layer), we created the required recipes to build and deploy NVIDIA-docker on Tempemail Linux LTS 19 (Yocto 3.0 Zeus).[4]
In this tutorial, you will find how to enable NVIDIA-containers on a custom distribution of Linux and run a small test application that leverages the use of GPUs inside a container.
Description
To enable NVIDIA containers, Docker needs the nvidia-container-runtime, a modified version of runc that adds a custom pre-start hook to all containers. The nvidia-container-runtime communicates with Docker using the library libnvidia-container, which automatically configures GNU/Linux containers to leverage NVIDIA hardware. This library relies on kernel primitives and is designed to be agnostic of the container runtime. All the effort to port these libraries and tools to the Yocto Project was submitted to the community and is now part of the meta-tegra layer, which is maintained by Matt Madison.
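For reference, a custom runtime such as this is typically registered with Docker through /etc/docker/daemon.json; a sketch of what such an entry looks like (the meta-tegra recipes install an equivalent configuration for you, so this is illustrative only):

```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```

With this in place, `docker run --runtime nvidia ...` selects the NVIDIA runtime for that container.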
Note: this setup is based on Linux for Tegra and not the original Yocto Linux kernel.
Benefits and Limitations
The main benefit of GPUs inside containers is the portability and stability in the environment at the time of deployment. Of course, the development also sees benefits in having this portable environment as developers can collaborate more efficiently.
However, there are limitations due to the nature of the NVIDIA environment. Containers are heavy-weight because they are based on the Linux4Tegra image, which contains libraries required at runtime. On the other hand, because of redistribution limitations, some libraries are not included in the container. This requires runc to mount some proprietary libraries from the host, losing portability in the process.
Prerequisites
You are required to download NVIDIA proprietary code from their website. To do so, you will need to create an NVIDIA Developer Network account.
Go to https://developer.nvidia.com/embedded/downloads , download the NVIDIA SDK Manager, install it, and download all the files for the Jetson board you own.
The required JetPack version is 4.3. Launch the SDK Manager with:
/opt/nvidia/sdkmanager/sdkmanager

Image 1. SDK Manager installation
If you need to include TensorRT in your builds, you must create the NoDLA subdirectory and copy all of the TensorRT packages downloaded by the SDK Manager there.
$ mkdir /home/$USER/Downloads/nvidia/sdkm_downloads/NoDLA
$ cp /home/$USER/Downloads/nvidia/sdkm_downloads/libnv* /home/$USER/Downloads/nvidia/sdkm_downloads/NoDLA

Creating the project
$ git clone --branch WRLINUX_10_19_BASE https://github.com/WindRiver-Labs/wrlinux-x.git
$ ./wrlinux-x/setup.sh --all-layers --dl-layers --templates feature/docker

Note: --distro wrlinux-graphics can be used for some applications that require x11.
Add meta-tegra layer
DISCLAIMER: meta-tegra is a community-maintained layer not supported by Tempemail at the time of writing.
$ git clone https://github.com/madisongh/meta-tegra.git layers/meta-tegra
$ cd layers/meta-tegra
$ git checkout 11a02d02a7098350638d7bf3a6c1a3946d3432fd
$ cd -

Tested with: https://github.com/madisongh/meta-tegra/commit/11a02d02a7098350638d7bf3a6c1a3946d3432fd
$ . ./environment-setup-x86_64-wrlinuxsdk-linux
$ . ./oe-init-build-env

$ bitbake-layers add-layer ../layers/meta-tegra/
$ bitbake-layers add-layer ../layers/meta-tegra/contrib

Configure the project
$ echo "BB_NO_NETWORK = '0'" >> conf/local.conf
$ echo 'INHERIT_DISTRO_remove = "whitelist"' >> conf/local.conf

Set the machine to your Jetson Board
$ echo "MACHINE='jetson-nano-qspi-sd'" >> conf/local.conf
$ echo "PREFERRED_PROVIDER_virtual/kernel = 'linux-tegra'" >> conf/local.conf

CUDA cannot be compiled with GCC versions higher than 7. Set GCC version to 7.%:
$ echo 'GCCVERSION = "7.%"' >> conf/local.conf
$ echo "require contrib/conf/include/gcc-compat.conf" >> conf/local.conf

Set the IMAGE export type to tegraflash for ease of deployment.
$ echo 'IMAGE_CLASSES += "image_types_tegra"' >> conf/local.conf
$ echo 'IMAGE_FSTYPES = "tegraflash"' >> conf/local.conf

Swap the default docker package for docker-ce (the nvidia-container-runtime is added below):
$ echo 'IMAGE_INSTALL_remove = "docker"' >> conf/local.conf
$ echo 'IMAGE_INSTALL_append = " docker-ce"' >> conf/local.conf

Fix the tini build error:
$ echo 'SECURITY_CFLAGS_pn-tini_append = " ${SECURITY_NOPIE_CFLAGS}"' >> conf/local.conf

Set the NVIDIA download location:
$ echo "NVIDIA_DEVNET_MIRROR='file:///home/$USER/Downloads/nvidia/sdkm_downloads'" >> conf/local.conf
$ echo 'CUDA_BINARIES_NATIVE = "cuda-binaries-ubuntu1604-native"' >> conf/local.conf

Add the NVIDIA container runtime, the AI libraries, and the AI library container CSV files:
$ echo 'IMAGE_INSTALL_append = " nvidia-docker nvidia-container-runtime cudnn tensorrt libvisionworks libvisionworks-sfm libvisionworks-tracking cuda-container-csv cudnn-container-csv tensorrt-container-csv libvisionworks-container-csv libvisionworks-sfm-container-csv libvisionworks-tracking-container-csv"' >> conf/local.conf
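The *-container-csv packages install mount manifests that tell the runtime which host files to bind into containers. A hypothetical entry (the paths and versions here are illustrative; on Tegra targets the real manifests are installed under /etc/nvidia-container-runtime/host-files-for-container.d/) looks like:

```
lib, /usr/lib/libcudnn.so.7
sym, /usr/lib/libcudnn.so
dir, /usr/src/tensorrt
```

Each line names a library, symlink, or directory that the pre-start hook mounts from the host into the container, which is how the proprietary pieces stay out of the image itself.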

Enable ldconfig, which is required by the nvidia-container-runtime:
$ echo 'DISTRO_FEATURES_append = " ldconfig"' >> conf/local.conf

Build the project
$ bitbake wrlinux-image-glibc-std

Burn the image into the SD card
$ unzip wrlinux-image-glibc-std-sato-jetson-nano-qspi-sd-20200226004915.tegraflash.zip -d wrlinux-jetson-nano
$ cd wrlinux-jetson-nano

Connect the Jetson Board to your computer using the micro USB cable as shown in the image:
Image 2. Recovery mode setup for Jetson Nano
Image 3. Pins Diagram for Jetson Nano
After connecting the board, run:
$ sudo ./dosdcard.sh

This command will create the file wrlinux-image-glibc-std.sdcard that contains the SD card image required to boot.
Burn the Image to the SD Card:
$ sudo dd if=wrlinux-image-glibc-std.sdcard of=/dev/***** bs=8k

Warning: substitute the of= device with the one that points to your SD card. Failure to do so can lead to unexpected erasure of hard disks.
Deploy the target
Boot up the board and find the IP address with the ifconfig command.
Then, ssh into the machine to run docker (substitute your board's IP address):
$ ssh root@<ip-address>

Create tensorflow_demo.py using the example from the "Train and evaluate with Keras" section in the TensorFlow documentation:
#!/usr/bin/python3
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(784,), name='digits')
x = layers.Dense(64, activation='relu', name='dense_1')(inputs)
x = layers.Dense(64, activation='relu', name='dense_2')(x)
outputs = layers.Dense(10, name='predictions')(x)

model = keras.Model(inputs=inputs, outputs=outputs)

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

# Preprocess the data (these are Numpy arrays)
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255

y_train = y_train.astype('float32')
y_test = y_test.astype('float32')

# Reserve 10,000 samples for validation
x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]

model.compile(optimizer=keras.optimizers.RMSprop(),  # Optimizer
              # Loss function to minimize
              loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              # List of metrics to monitor
              metrics=['sparse_categorical_accuracy'])

print('# Fit model on training data')
history = model.fit(x_train, y_train,
                    batch_size=64,
                    epochs=3,
                    # We pass some validation for
                    # monitoring validation loss and metrics
                    # at the end of each epoch
                    validation_data=(x_val, y_val))

print('\nhistory dict:', history.history)

# Evaluate the model on the test data using `evaluate`
print('\n# Evaluate on test data')
results = model.evaluate(x_test, y_test, batch_size=128)
print('test loss, test acc:', results)

# Generate predictions (probabilities -- the output of the last layer)
# on new data using `predict`
print('\n# Generate predictions for 3 samples')
predictions = model.predict(x_test[:3])
print('predictions shape:', predictions.shape)

Create a Dockerfile:
FROM tianxiang84/l4t-base:all

WORKDIR /root
COPY tensorflow_demo.py .

ENTRYPOINT ["/usr/bin/python3"]
CMD ["/root/tensorflow_demo.py"]

Build the container:
# docker build -t l4t-tensorflow .

Run the container:
# docker run --runtime nvidia -it l4t-tensorflow

Results
Note the use of the GPU0:
2020-04-22 21:13:56.969319: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2020-04-22 21:13:58.210600: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 268 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)

Conclusions
The use of NVIDIA-containers allows a smooth deployment of AI applications. Once you have your Linux distribution running containers with the custom NVIDIA runtime, getting a Neural Network to work is as simple as running one command. Getting an NVIDIA Tegra board to run computing-intensive workloads is now easier than ever.
With the provided custom runc engine that allows the use of CUDA and other related libraries, you will be running applications as if they were on bare-metal.
One of the possibilities the containers offer is combining this setup with Kubernetes or the NVIDIA EGX Platform so that you can do the orchestration. The Kubernetes Device Plugins distribute and manage workloads across multiple acceleration devices, giving you high availability as well as other benefits. Combined with other technologies such as Tensorflow and OpenCV, and you will have an army of edge devices ready to run your Intelligent applications for you.
References

[1] K. Hao, “The computing power needed to train AI is now rising seven times faster than ever before”, MIT Technology Review, Nov. 2019. [Online]. Available: https://www.technologyreview.com/s/614700/the- computing-power-needed-to-train-ai-is-now-rising-seven-times-faster-than-ever-before.
[2] Nvidia, nvidia-docker, [Online; accessed 15. Mar. 2020], Feb. 2020. [Online]. Available:https://github.com/NVIDIA/nvidia-docker.
[3] L. Benedicic and M. Gila, “Accessing gpus from containers in hpc”, 2016. [Online]. Available: http://sc16.supercomputing.org/sc-archive/tech_poster/poster_files/post187s2-file3.pdf.
[4] M. Madison, Container runtime for master, [Online; accessed 30. Mar. 2020], Mar. 2020. [Online]. Available:https://github.com/madisongh/meta-tegra/pull/266

All product names, logos, and brands are property of their respective owners.All company, product and service names used in this software are for identification purposes only. Tempemail are registered trademarks of Tempemail Systems.
Disclaimer of Warranty / No Support: Tempemail does not provide support and maintenance services for this software, under Tempemail’s standard Software Support and Maintenance Agreement or otherwise. Unless required by applicable law, Tempemail provides the software (and each contributor provides its contribution) on an “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, either express or implied, including, without limitation, any warranties of TITLE, NONINFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the software and assume ay risks associated with your exercise of permissions under the license.
TensorFlow, the TensorFlow logo and any related marks are trademarks of Google Inc.
Docker is a trademark of Docker, Inc.
NVIDIA, NVIDIA EGX, CUDA, Jetson, and Tegra are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.

Tempemail , Tempmail Temp email addressess (10 minutes emails)– When you want to create account on some forum or social media, like Facebook, Reddit, Twitter, TikTok you have to enter information about your e-mail box to get an activation link. Unfortunately, after registration, this social media sends you dozens of messages with useless information, which you are not interested in. To avoid that, visit this Temp mail generator: tempemail.co and you will have a Temp mail disposable address and end up on a bunch of spam lists. This email will expire after 10 minute so you can call this Temp mail 10 minute email. Our service is free! Let’s enjoy!

NVIDIA container runtime for Wind River Linux

By Pablo Rodriguez Quesada

Introduction
Training and using AI models are tasks that demand significant computational power. Current trends point toward deep neural networks, which involve thousands, if not millions, of operations per iteration. In the past year, more and more researchers have sounded the alarm on the exploding costs of deep learning: the computing power needed to do AI is now rising seven times faster than ever before [1]. These demands are pushing hardware companies to create dedicated accelerators such as neural processing units (NPUs) and GPUs.
Embedded systems are no exception to this transformation. Every day we see intelligent traffic lights, autonomous vehicles, intelligent IoT devices, and more. The current direction is to place accelerators inside these embedded devices, mainly systems-on-chip. Hardware developers have embedded small accelerators such as GPUs and FPGAs into SoCs, SoMs, and other systems. We call these modern systems heterogeneous computing architectures.
The use of GPUs on Linux is not new; we have been able to use them for many years. Containers, however, can greatly accelerate the development and deployment of HPC applications: they provide portability, stability, and many other benefits when deploying an application. For this reason, companies are investing heavily in these technologies. For instance, NVIDIA recently started a project that enables CUDA on Docker [2].
One concern when dealing with containers is the loss of performance. However, when comparing GPU performance with and without a container environment, researchers found that no additional overhead is introduced [3]. This consistent performance is one of the principal benefits of containers over virtual machines; the GPU is accessed seamlessly because the kernel stays constant.
NVIDIA-Docker on Yocto
Together with Matt Madison (maintainer of the meta-tegra layer), we created the recipes required to build and deploy NVIDIA-Docker on Wind River Linux LTS 19 (Yocto 3.0 Zeus) [4].
In this tutorial, you will learn how to enable NVIDIA containers on a custom Linux distribution and run a small test application that leverages GPUs inside a container.
Description
To enable NVIDIA containers, Docker needs the nvidia-container-runtime, a modified version of runc that adds a custom pre-start hook to all containers. The nvidia-container-runtime communicates with Docker using the libnvidia-container library, which automatically configures GNU/Linux containers to leverage NVIDIA hardware. This library relies on kernel primitives and is designed to be agnostic of the container runtime. All the effort to port these libraries and tools to the Yocto Project was submitted to the community and is now part of the meta-tegra layer, maintained by Matt Madison.
Note: this setup is based on Linux for Tegra (L4T), not the original Yocto Linux kernel.
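For reference, the runtime described above is selected through Docker's daemon configuration. The sketch below shows roughly what the registration in /etc/docker/daemon.json contains; the binary path is the usual default install location and is an assumption here (on the target image, the meta-tegra recipes set this up for you):

```python
import json

# Sketch of the Docker daemon configuration that registers the nvidia
# runtime. The binary path below is the usual default install location
# (an assumption; it may differ on a custom image).
daemon_config = {
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": [],
        }
    }
}

# Written to /etc/docker/daemon.json, this entry is what lets
# `docker run --runtime nvidia ...` select the modified runc.
print(json.dumps(daemon_config, indent=4))
```

With this entry in place, selecting the runtime triggers the pre-start hook that wires the GPU into the container.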
Benefits and Limitations
The main benefit of GPUs inside containers is the portability and stability in the environment at the time of deployment. Of course, the development also sees benefits in having this portable environment as developers can collaborate more efficiently.
However, there are limitations due to the nature of the NVIDIA environment. The containers are heavyweight because they are based on the Linux4Tegra image, which contains the libraries required at runtime. Moreover, because of redistribution restrictions, some libraries are not included in the container; runc must mount these proprietary libraries from the host, losing some portability in the process.
Prerequisites
You are required to download proprietary NVIDIA software from the NVIDIA website. To do so, you will need to create an NVIDIA Developer Network account.
Go to https://developer.nvidia.com/embedded/downloads, download the NVIDIA SDK Manager, install it, and download all the files for the Jetson board you own.
The required JetPack version is 4.3. Launch the SDK Manager:
/opt/nvidia/sdkmanager/sdkmanager

Image 1. SDK Manager installation
If you need to include TensorRT in your builds, you must create a NoDLA subdirectory and move all of the TensorRT packages downloaded by the SDK Manager there:
$ mkdir /home/$USER/Downloads/nvidia/sdkm_downloads/NoDLA
$ cp /home/$USER/Downloads/nvidia/sdkm_downloads/libnv* /home/$USER/Downloads/nvidia/sdkm_downloads/NoDLA

Creating the project
$ git clone --branch WRLINUX_10_19_BASE https://github.com/WindRiver-Labs/wrlinux-x.git
$ ./wrlinux-x/setup.sh --all-layers --dl-layers --templates feature/docker

Note: --distro wrlinux-graphics can be used for some applications that require X11.
Add meta-tegra layer
DISCLAIMER: meta-tegra is a community-maintained layer, not supported by Wind River at the time of writing.
$ git clone https://github.com/madisongh/meta-tegra.git layers/meta-tegra
$ cd layers/meta-tegra
$ git checkout 11a02d02a7098350638d7bf3a6c1a3946d3432fd
$ cd -

Tested with: https://github.com/madisongh/meta-tegra/commit/11a02d02a7098350638d7bf3a6c1a3946d3432fd
$ . ./environment-setup-x86_64-wrlinuxsdk-linux
$ . ./oe-init-build-env

$ bitbake-layers add-layer ../layers/meta-tegra/
$ bitbake-layers add-layer ../layers/meta-tegra/contrib

Configure the project
$ echo "BB_NO_NETWORK = '0'" >> conf/local.conf
$ echo 'INHERIT_DISTRO_remove = "whitelist"' >> conf/local.conf

Set the machine to your Jetson Board
$ echo "MACHINE='jetson-nano-qspi-sd'" >> conf/local.conf
$ echo "PREFERRED_PROVIDER_virtual/kernel = 'linux-tegra'" >> conf/local.conf

CUDA cannot be compiled with GCC versions higher than 7. Set GCC version to 7.%:
$ echo 'GCCVERSION = "7.%"' >> conf/local.conf
$ echo "require contrib/conf/include/gcc-compat.conf" >> conf/local.conf

Set the IMAGE export type to tegraflash for ease of deployment.
$ echo 'IMAGE_CLASSES += "image_types_tegra"' >> conf/local.conf
$ echo 'IMAGE_FSTYPES = "tegraflash"' >> conf/local.conf

Replace the default Docker package with docker-ce:
$ echo 'IMAGE_INSTALL_remove = "docker"' >> conf/local.conf
$ echo 'IMAGE_INSTALL_append = " docker-ce"' >> conf/local.conf

Fix tini build error
$ echo 'SECURITY_CFLAGS_pn-tini_append = " ${SECURITY_NOPIE_CFLAGS}"' >> conf/local.conf

Set NVIDIA download location
$ echo "NVIDIA_DEVNET_MIRROR='file:///home/$USER/Downloads/nvidia/sdkm_downloads'" >> conf/local.conf
$ echo 'CUDA_BINARIES_NATIVE = "cuda-binaries-ubuntu1604-native"' >> conf/local.conf

Add the NVIDIA container runtime, the AI libraries, and the AI libraries' container CSV files:
$ echo 'IMAGE_INSTALL_append = " nvidia-docker nvidia-container-runtime cudnn tensorrt libvisionworks libvisionworks-sfm libvisionworks-tracking cuda-container-csv cudnn-container-csv tensorrt-container-csv libvisionworks-container-csv libvisionworks-sfm-container-csv libvisionworks-tracking-container-csv"' >> conf/local.conf

Enable ldconfig, which is required by the nvidia-container-runtime:
$ echo 'DISTRO_FEATURES_append = " ldconfig"' >> conf/local.conf

Build the project
$ bitbake wrlinux-image-glibc-std

Burn the image into the SD card
$ unzip wrlinux-image-glibc-std-sato-jetson-nano-qspi-sd-20200226004915.tegraflash.zip -d wrlinux-jetson-nano
$ cd wrlinux-jetson-nano

Connect the Jetson Board to your computer using the micro USB cable as shown in the image:
Image 2. Recovery mode setup for Jetson Nano
Image 3. Pins Diagram for Jetson Nano
After connecting the board, run:
$ sudo ./dosdcard.sh

This command will create the file wrlinux-image-glibc-std.sdcard that contains the SD card image required to boot.
Burn the Image to the SD Card:
$ sudo dd if=wrlinux-image-glibc-std.sdcard of=/dev/***** bs=8k

Warning: substitute the of= device with the one that points to your SD card. Failure to do so can lead to the unexpected erasure of hard disks.
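Since dd writes to whatever device you name, a quick programmatic sanity check can help catch mistakes before flashing. The helper below is a hypothetical illustration (not part of the tutorial): it scans a mounts table to see whether a device, or any of its partitions, is currently in use.

```python
def is_mounted(device: str, mounts_text: str) -> bool:
    """Return True if the given block device (or any of its
    partitions) appears in the provided mounts table text."""
    for line in mounts_text.splitlines():
        fields = line.split()
        if fields and fields[0].startswith(device):
            return True
    return False

# On a live system you would read the real table with:
#   mounts_text = open("/proc/mounts").read()
sample = (
    "/dev/sda2 / ext4 rw 0 0\n"
    "/dev/mmcblk0p1 /media/sd vfat rw 0 0\n"
)
print(is_mounted("/dev/sda", sample))  # True: the system disk, do not dd here
print(is_mounted("/dev/sdb", sample))  # False: not mounted in this sample
```

A device that shows up as mounted is almost certainly not the SD card you just inserted for flashing.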
Deploy the target
Boot up the board and find its IP address with the ifconfig command.
Then, ssh into the machine to run Docker:
$ ssh root@<board-ip>

Create tensorflow_demo.py using the example from the “Train and evaluate with Keras” section of the TensorFlow documentation:
#!/usr/bin/python3
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(784,), name='digits')
x = layers.Dense(64, activation='relu', name='dense_1')(inputs)
x = layers.Dense(64, activation='relu', name='dense_2')(x)
outputs = layers.Dense(10, name='predictions')(x)

model = keras.Model(inputs=inputs, outputs=outputs)

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

# Preprocess the data (these are Numpy arrays)
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255

y_train = y_train.astype('float32')
y_test = y_test.astype('float32')

# Reserve 10,000 samples for validation
x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]

model.compile(optimizer=keras.optimizers.RMSprop(),  # Optimizer
              # Loss function to minimize
              loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              # List of metrics to monitor
              metrics=['sparse_categorical_accuracy'])

print('# Fit model on training data')
history = model.fit(x_train, y_train,
                    batch_size=64,
                    epochs=3,
                    # We pass some validation data for
                    # monitoring validation loss and metrics
                    # at the end of each epoch
                    validation_data=(x_val, y_val))

print('\nhistory dict:', history.history)

# Evaluate the model on the test data using `evaluate`
print('\n# Evaluate on test data')
results = model.evaluate(x_test, y_test, batch_size=128)
print('test loss, test acc:', results)

# Generate predictions (probabilities -- the output of the last layer)
# on new data using `predict`
print('\n# Generate predictions for 3 samples')
predictions = model.predict(x_test[:3])
print('predictions shape:', predictions.shape)

Create a Dockerfile:
FROM tianxiang84/l4t-base:all

WORKDIR /root
COPY tensorflow_demo.py .

ENTRYPOINT ["/usr/bin/python3"]
CMD ["/root/tensorflow_demo.py"]

Build the container:
# docker build -t l4t-tensorflow .

Run the container:
# docker run --runtime nvidia -it l4t-tensorflow

Results
Note the use of GPU 0:
2020-04-22 21:13:56.969319: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2020-04-22 21:13:58.210600: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 268 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
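If you want to verify GPU pickup in a script rather than by reading the log by hand, a small parser is enough. The sketch below runs a regular expression against the exact device-creation line shown above:

```python
import re

# The TensorFlow device-creation line from the run above.
log_line = (
    "2020-04-22 21:13:58.210600: I tensorflow/core/common_runtime/gpu/"
    "gpu_device.cc:1326] Created TensorFlow device "
    "(/job:localhost/replica:0/task:0/device:GPU:0 with 268 MB memory) "
    "-> physical GPU (device: 0, name: NVIDIA Tegra X1, "
    "pci bus id: 0000:00:00.0, compute capability: 5.3)"
)

# Extract the GPU index and the memory TensorFlow reserved on it.
match = re.search(r"device:GPU:(\d+) with (\d+) MB memory", log_line)
if match:
    gpu_id, mem_mb = int(match.group(1)), int(match.group(2))
    print(f"GPU {gpu_id} detected with {mem_mb} MB memory")
```

An absent match would indicate TensorFlow fell back to CPU-only execution.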

Conclusions
The use of NVIDIA containers allows a smooth deployment of AI applications. Once your Linux distribution is running containers with the custom NVIDIA runtime, getting a neural network to work is as simple as running one command. Getting an NVIDIA Tegra board to run compute-intensive workloads is now easier than ever.
With the provided custom runc engine enabling the use of CUDA and other related libraries, your applications run as if they were on bare metal.
Containers also make it possible to combine this setup with Kubernetes or the NVIDIA EGX platform for orchestration. The Kubernetes device plugins distribute and manage workloads across multiple acceleration devices, giving you high availability as well as other benefits. Combine this with technologies such as TensorFlow and OpenCV, and you will have an army of edge devices ready to run your intelligent applications.
References

[1] K. Hao, “The computing power needed to train AI is now rising seven times faster than ever before”, MIT Technology Review, Nov. 2019. [Online]. Available: https://www.technologyreview.com/s/614700/the-computing-power-needed-to-train-ai-is-now-rising-seven-times-faster-than-ever-before.
[2] NVIDIA, nvidia-docker, [Online; accessed 15 Mar. 2020], Feb. 2020. [Online]. Available: https://github.com/NVIDIA/nvidia-docker.
[3] L. Benedicic and M. Gila, “Accessing GPUs from containers in HPC”, 2016. [Online]. Available: http://sc16.supercomputing.org/sc-archive/tech_poster/poster_files/post187s2-file3.pdf.
[4] M. Madison, Container runtime for master, [Online; accessed 30 Mar. 2020], Mar. 2020. [Online]. Available: https://github.com/madisongh/meta-tegra/pull/266.

All product names, logos, and brands are property of their respective owners. All company, product, and service names used in this software are for identification purposes only. Wind River is a registered trademark of Wind River Systems.
Disclaimer of Warranty / No Support: Wind River does not provide support and maintenance services for this software, under Wind River’s standard Software Support and Maintenance Agreement or otherwise. Unless required by applicable law, Wind River provides the software (and each contributor provides its contribution) on an “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, either express or implied, including, without limitation, any warranties of TITLE, NONINFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the software and assume any risks associated with your exercise of permissions under the license.
TensorFlow, the TensorFlow logo and any related marks are trademarks of Google Inc.
Docker is a trademark of Docker, Inc.
NVIDIA, NVIDIA EGX, CUDA, Jetson, and Tegra are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.


Microsoft’s Linux embrace continues with DirectX-tend

Microsoft will bring its famed set of DirectX gaming application programming interfaces to the Windows Subsystem for Linux 2 (WSL 2) environment to provide hardware accelerated graphics for Linux applications.
With a slew of Windows graphics features soon to be available on WSL 2, Microsoft is already talking about moving the Linux environment beyond text-only console mode, with applications getting graphical user interfaces (GUIs) as well.
A new Linux dxgkrnl kernel driver was clean-room developed for the WSL DirectX support, based on Microsoft’s Windows Display Driver Model graphics processing unit para-virtualisation (GPU-PV) technology.
The driver will communicate with the Windows kernel and the physical graphics card; Microsoft said multiple GPUs are supported if they run WDDM 2.9 drivers.
Sharing the GPU between Windows and WSL 2 is dynamic and not subject to partitioning or resource limits, with applications running at near-native speed, less some virtualisation overhead.
Microsoft said that its method of projecting a WDDM-compatible abstraction for the graphics card inside the Linux kernel allowed the company to recompile its entire DirectX API to WSL 2.
This means the full Direct3D 12 and DxCore APIs, with the caveat that support is currently limited to GNU C library (glibc) distributions such as Fedora, Ubuntu, CentOS and others.
Graphics card vendors will need to provide user mode drivers (UMDs) for their hardware too.
The Linux dxgkrnl driver will also be used to provide support for non-DirectX hardware acceleration through the Khronos APIs such as OpenGL and OpenCL through the Mesa library, Microsoft said.
Vulkan is not yet supported, but Microsoft said it’s looking into how to integrate that particular Khronos API for Linux as well.
Over the years, Microsoft has extended the role of DirectX beyond gaming and graphics, and added support for machine learning and artificial intelligence training.
Its machine learning API, DirectML, has been ported and works on Linux when running WSL 2, Microsoft said.
Microsoft will also add support for the Nvidia CUDA API for WSL 2.
This means hardware acceleration for CUDA-X libraries like cuDNN, cuBLAS and TensorRT.
Support for CUDA will come with Nvidia’s WDDM v2.9 driver, and be automatically installed and work on glibc-based Linux distributions running on WSL 2.
Hardware acceleration for Nvidia’s Docker tools within WSL 2 will also be supported for containerised workloads, and available as an additional package.
To try out DirectX on Linux and WSL 2, users need to join the Windows 10 Insider preview program and select the Fast ring.


Data61, Linux Foundation launch seL4 open source foundation

The Linux Foundation is set to host a new global not-for-profit foundation established by the CSIRO’s Data61 to promote and fund the development of its security-focused microkernel, seL4.
The secure embedded L4 (seL4) microkernel was developed by Data61 to provide a reliable, secure, fast and verified base for building trustworthy operating systems that handle sensitive information. It has been deployed in defence and aerospace settings.
seL4 enforces security within componentised system architectures by ensuring isolation between trusted and untrusted system components, and by carefully controlling software access to hardware devices in the system.
The new seL4 Foundation will be chaired by one of the microkernel’s original developers, Scientia Professor Gernot Heiser from UNSW Sydney and Data61.
“This is about taking the seL4 ecosystem to the next level,” Heiser said.
“While broadening the community of contributors and adopters, we will continue to drive the kernel’s evolution and the research that ensures it will remain the world’s most advanced OS technology.”
The Linux Foundation said the work of the new seL4 foundation will be “paramount” in safeguarding avionics, autonomous vehicles, medical devices, and critical infrastructure from cyber attacks.
Michael Dolan, vice president of strategic programs at the Linux Foundation, said it will support seL4’s growth and community development “by providing expertise and services to increase community engagement, contributors and adopters.”
“The open governance and standards-based model will provide a neutral, mature and trustworthy framework to help advance an operating system that is readily deployable and optimised for security,” Dolan said.
Other backers and members of the seL4 foundation include UNSW Sydney, HENSOLDT Cyber, Ghost Locomotion, Cog Systems, and DornerWorks.
seL4’s 12,000 lines of C code are available on GitHub.
It runs on a variety of hardware platforms, including x86, x86-64, Arm and RISC-V. seL4 can also run on virtual machines to support legacy software.


Embedded Linux Implementation Services: Enabling Linux Deployments

By Glenn Seiler

The Wind River Linux suite includes everything embedded teams need to create secure, reliable, and high-performance products. It enables the building and deployment of intelligent edge devices and systems without the risk, effort, and high cost of ownership associated with a roll-your-own (RYO) approach.

Interested in learning more about commercially supported Linux and our available Linux implementation services?

Download our eBook: Linux for the Intelligent Edge

Download Now

Wind River provides a modern, cloud-native development environment with integrated and comprehensive services and long-term support.
•        Validated source code with support and maintenance subscription options
•        Pre-built containers, tools, and documentation
•        Docker and Kubernetes support
•        Hundreds of BSPs supporting virtually every processor
•        Optional Wind River Workbench development suite, a comprehensive set of Eclipse-based developer tools for building embedded products
•        Lifecycle management services: fee-based support for nearly all development and maintenance phases of your design and development process
Linux Implementation Services
As the leader in embedded operating systems for over 35 years, Wind River has expertise that can help you unlock the full potential of the Wind River Linux suite. This support dramatically reduces the total cost of ownership, risk, and time associated with embedded development.

Our team provides:
•        Industry-specific expertise: Wind River helps design teams define and build products according to specific market, security, safety, and certification requirements.
•        Managed distribution: Wind River can support your software branch on your hardware, doing monthly, quarterly, or yearly releases as needed. We can manage the Linux operating system build so you can focus on the areas where you add value.
•        Custom content management: Wind River manages your software using your hardware.
•        Frozen branch management: Wind River provides patches and updates for your frozen version of Linux.
•        On-demand engineering services: Wind River is here to help when you need technical resources on a flexible, short- or long-term basis.
The Wind River Customer Success Organization provides teams with the additional support they need to create next-generation intelligent devices.
Customer Support
Wind River Customer Support can help you overcome challenges and get the most out of your implementation of our technology, with services that include designated support engineers, hosted customer environments, person-to-person help lines, and our online Wind River Support Network for interactive self-help.
Improve Embedded Linux Implementation with Wind River
Finding the right commercially supported embedded Linux solution is critical, but ensuring that you have the latest tools readily available is key to project success. The Linux implementation services available from Wind River ensure your team is able to create safe, secure, reliable, and certifiable products to support the intelligent edge.

Interested in learning more about commercially supported Linux and our available Linux implementation services?

Download our eBook: Linux for the Intelligent Edge
Download Now



Enabling DevOps with Continuously Delivered Embedded Linux

By Glenn Seiler

Implementing agile processes in embedded Linux development is key to supporting the intelligent edge. Engineers need solutions that support rapid development processes while still ensuring the safety, security, reliability, and certifiability needed for these embedded products. The latest Wind River Linux deployment offering includes everything teams need to create the products of tomorrow, with a solution that allows you to get to market faster, improve long-term savings, and manage compliance.

Interested in learning more about agile Linux offerings?

Download our eBook:

Linux for the Intelligent Edge

Download Now

Agile Linux from Wind River – One of Three Options
Which Linux is right for you? Wind River Linux is distributed in three primary ways:
•        Validated community code: Ready to download, freely available on GitHub, with no commitment or paperwork to sign
•        Wind River Long-Term Support releases: Source code released with a predictable cadence and a standard five-year product lifecycle (extendable with Wind River Professional Services), with regular maintenance releases and continuous security monitoring
•        Continuous Delivery (CD): DevOps ready with frequent releases

Benefits of More Agile Linux
•        Bug defects are quickly diagnosed and resolved
•        Continuous security monitoring is included
•        Compliance and export artifacts
Why Continuous Delivery?
Using and implementing Linux on a continuous delivery schedule allows your team to effectively implement more efficient CI/CD and DevOps processes.
More frequent software releases allow DevOps to identify and resolve issues more quickly and with regular feedback from the end users. A continuous delivery schedule is key in enabling the improvement of development processes with:
•        Greater flexibility and stability
•        Higher quality and efficiency
•        Faster access to the latest features
•        Tighter feedback loops

Interested in learning more about the benefits of agile development processes for embedded systems?

Download our eBook:

Realizing the DevOps Vision in Embedded Systems

Download Now

Ready to Get Started with Agile Linux?
Using a platform that supports agile development in embedded systems is a key driver in enabling the intelligent edge. Choosing a commercially supported embedded Linux solution, like Wind River Linux, gives you the tools required to create safe, secure, reliable, and certifiable products. Offering Wind River Linux three ways gives your organization the flexibility it needs to create advanced products.

Interested in learning more about Wind River Linux?

Download our eBook: Linux for the Intelligent Edge

Download Now


Critical PPP Daemon Flaw Opens Most Linux Systems to Remote Hackers

The US-CERT today issued advisory warning users of a new dangerous remote code execution vulnerability affecting the PPP daemon (pppd) software that comes installed on almost all Linux based operating systems, as well as powers the firmware of many other networking devices.
The affected pppd software is an implementation of the Point-to-Point Protocol (PPP) that enables communication and data transfer between two nodes, primarily used to establish internet links such as those over dial-up modems, DSL broadband connections, and Virtual Private Networks.
Discovered by IOActive security researcher Ilja Van Sprundel, the critical issue is a stack buffer overflow vulnerability that exists due to a logical error in the Extensible Authentication Protocol (EAP) packet parser of the pppd software.

The vulnerability, tracked as CVE-2020-8597 with a CVSS score of 9.8, can be exploited by unauthenticated attackers to remotely execute arbitrary code on affected systems and take full control of them.
To do so, all an attacker needs is to send an unsolicited malformed EAP packet to a vulnerable ppp client or server.
Additionally, since pppd often runs with high privileges and works in conjunction with kernel drivers, the flaw could allow attackers to execute malicious code with system or root-level privileges.
“This vulnerability is due to an error in validating the size of the input before copying the supplied data into memory. As the validation of the data size is incorrect, arbitrary data can be copied into memory and cause memory corruption, possibly leading to the execution of unwanted code,” the advisory says.
“The vulnerability is in the logic of the eap parsing code, specifically in the eap_request() and eap_response() functions in eap.c that are called by a network input handler.”
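To make the bug class concrete: the sketch below is not the actual pppd code but a minimal, hypothetical packet parser in Python, showing why a length field taken from the packet itself must be checked against both the data actually received and the destination buffer size before any copy happens. Omitting either check is the kind of validation error the advisory describes.

```python
def parse_eap_name(packet: bytes, buf_size: int = 32) -> bytes:
    """Schematic EAP-style parser (illustrative only, not pppd code).

    The payload length comes from an attacker-controlled header byte,
    so it must be validated before the payload is copied anywhere.
    """
    if len(packet) < 2:
        raise ValueError("truncated header")
    claimed_len = packet[1]      # length field supplied by the sender
    payload = packet[2:]
    # The two checks whose absence defines this bug class:
    if claimed_len > len(payload):
        raise ValueError("length field exceeds data actually received")
    if claimed_len > buf_size:
        raise ValueError("payload larger than destination buffer")
    return payload[:claimed_len]  # safe, bounded copy
```

In C, skipping these checks before a `memcpy` into a fixed-size stack buffer yields exactly the stack buffer overflow described above; in this Python sketch the same mistake is simply made visible as a rejected packet.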

pppd Bug: Affected Operating Systems and Devices

According to the researchers, Point-to-Point Protocol Daemon versions 2.4.2 through 2.4.8 — all versions released in the last 17 years — are vulnerable to this new remote code execution vulnerability.
Several widely used Linux distributions have already been confirmed as impacted, and many other projects are most likely affected as well. Beyond those, the full list of vulnerable applications and devices that ship the pppd software would be exhaustive, opening a large attack surface for hackers. Users of affected operating systems and devices are advised to apply security patches as soon as they become available.
At the time of writing, The Hacker News is not aware of any public proof-of-concept exploit code for this vulnerability or any in-the-wild exploitation attempts.


How To Send Email Using Telnet In Kali Linux?

I think everyone already knows how to send an email the conventional way; that is why I made this tutorial about how to send email using telnet in Kali Linux.

The way we send email in this tutorial is a little different from sending email with Gmail or Yahoo, because we will send it from a command prompt or terminal.
Let’s start the tips:
Step by step how to send email using telnet:
The information I will use in this tutorial is below:
SMTP server address: mail.vishnuvalentino.lan
SMTP server IP address: 192.168.160.174
SMTP port: 25
If you try this tutorial on Windows 7, where the telnet client is disabled by default, you can enable it by reading the tutorial on how to enable telnet on Windows 7.

In this tutorial, the scenario is that we are inside the ISP network. Let me show you the network topology where we do this.

We will start from client number one because from this client we can send email anonymously.

From the intro we know that SMTP uses port 25. Open the terminal or command prompt. Then run the telnet command to connect to the mail server.

telnet mail.vishnuvalentino.lan 25
or
telnet 192.168.160.174 25
25 is the SMTP port, most email servers use this port to send email.

We can connect to the mail server (the server replies with code 220). Now let’s greet this mail server.

HELO mail.vishnuvalentino.lan
After we greet the mail server, we need to make sure that it replies with 250 (OK).

After getting the 250 reply from the mail server, we can start defining the email sender and email recipient.

MAIL FROM: hacking-t[email protected]
If the sender is accepted, the server replies 250.
The next step is to specify the recipient.
RCPT TO: [email protected]
This should also return 250 (OK).

If the email sender and recipient are both accepted, we can compose the message. Type DATA and press Enter.

DATA
Then write the subject of the email.
To end the message, put a single dot (.) on a line by itself and press Enter; this tells the server we have finished composing the message and are ready to send it.

The picture in step 5 shows an error, because I only used a dummy local SMTP server and was a little too lazy to configure it fully.

But if you succeed in sending the email at this step, the server will reply with the message “Message accepted for delivery”.

To quit telnet, just type QUIT.
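The same conversation can be scripted instead of typed by hand. The sketch below replays the manual telnet session from Python over a raw socket: HELO, MAIL FROM, RCPT TO, DATA, the message ending in a lone dot, then QUIT. The host, port, and addresses shown are the example values from this tutorial, so substitute your own mail server's details.

```python
import socket

def smtp_commands(host, sender, recipient, subject, body):
    """The command sequence typed in the telnet session above."""
    return [
        f"HELO {host}",
        f"MAIL FROM: <{sender}>",
        f"RCPT TO: <{recipient}>",
        "DATA",
        f"Subject: {subject}\r\n\r\n{body}\r\n.",  # lone dot ends DATA
        "QUIT",
    ]

def send_mail(host, port, sender, recipient, subject, body):
    """Replay the manual session: read the 220 greeting, then one
    server reply line per command sent."""
    with socket.create_connection((host, port), timeout=10) as sock:
        f = sock.makefile("rwb")
        print(f.readline().decode().strip())       # 220 greeting
        for cmd in smtp_commands(host, sender, recipient, subject, body):
            f.write(cmd.encode() + b"\r\n")
            f.flush()
            print(f.readline().decode().strip())   # e.g. 250 OK

# Example call using the tutorial's lab server (replace with yours):
# send_mail("mail.vishnuvalentino.lan", 25,
#           "sender@example.lan", "recipient@example.lan",
#           "test", "Hello from Python")
```

This is only a sketch for a permissive lab server like the one in this tutorial; real mail servers typically require EHLO, authentication, and TLS, which Python's `smtplib` handles for you.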

 


Tempemail Linux Earns a “Best in Show” Award at Embedded World 2020

By Michel Genard
I’m excited to share that Tempemail Linux, our commercial-grade embedded Linux development platform for intelligent edge devices and systems, has received Embedded Computing Design’s prestigious “Best in Show” award in the Development Tools & Operating Systems category. The award was announced this week at Embedded World 2020 in Nuremberg, Germany.
The Best in Show Awards recognize the most innovative products presented at Embedded Computing Design events. The judges, composed of the online publication’s editorial team and advisory board members, focus on three primary criteria in selecting the winners in 10 categories: design excellence, performance compared to competing alternatives, and impact on the embedded engineering market.
The award closely follows our announcement of a new continuous delivery subscription model for Tempemail Linux users. This new approach allows customers to access new releases every few weeks to support their own continuous integration/continuous delivery (CI/CD) DevOps processes. Edge connectivity requires constant adjustments to incorporate, validate, and deliver new features and applications in embedded systems. CI/CD supports continuous improvement and innovation by enabling teams to implement changes frequently, rapidly, and reliably. Under the new subscription model, developers can continuously incorporate new features and fixes into mission-critical devices and systems built on our Linux platform. A Tempemail Linux subscription also provides access to long-term support and maintenance, including security updates, as well as ongoing threat mitigation to address emerging vulnerabilities identified by the Tempemail security team.
We are honored to receive this recognition from Embedded Computing Design. Tempemail Linux enables developers to leverage the benefits of open-source technology, optimized for demanding, market-grade applications and use cases across a variety of sectors. At a time when many of our customers need to make changes to their systems weekly, daily, or even hourly, our new subscription model addresses the need for frequent, rapid software updates and nonstop security monitoring. We are grateful to the Embedded World judges and to the legions of loyal customers who trust Tempemail Linux for critical embedded applications.
Embedded World is the premier showcase for innovations across the full spectrum of embedded technology. Tempemail presented virtually at the event, sharing how our software portfolio is accelerating the evolution from automated to autonomous and leading the 5G revolution across the intelligent edge, for robotics, energy, manufacturing, and autonomous vehicles of all kinds across air, land, and sea.
To learn more about Tempemail Linux click here.
