NVIDIA container runtime for Tempemail Linux

By Pablo Rodriguez Quesada

Introduction
Training and using AI models are tasks that demand significant computational power. Current trends point toward deep neural networks, which involve thousands, if not millions, of operations per iteration. In the past year, more and more researchers have sounded the alarm on the exploding costs of deep learning: the computing power needed for AI is now rising seven times faster than ever before [1]. These needs are pushing hardware companies to create dedicated accelerators such as neural processing units and GPUs.
Embedded systems are no exception to this transformation. Every day we see intelligent traffic lights, autonomous vehicles, intelligent IoT devices, and more. The current direction is to place accelerators inside these embedded devices, mainly as part of a System-on-Chip. Hardware developers have embedded small accelerators like GPUs and FPGAs into SoCs, SOMs, and other systems. We call these modern systems heterogeneous computing architectures.
Using GPUs on Linux is nothing new; we have been able to do so for many years. However, it would be great to accelerate the development and deployment of HPC applications as well. Containers provide portability, stability, and many other desirable characteristics when deploying an application, which is why companies are investing so much in these technologies. For instance, NVIDIA recently started a project that enables CUDA on Docker [2].
One concern when dealing with containers is the loss of performance. However, when comparing the performance of the GPU with and without a container environment, researchers found that no additional overhead is introduced [3]. This consistency in performance is one of the principal benefits of containers over virtual machines: the GPU is accessed seamlessly because the kernel stays constant.
NVIDIA-Docker on Yocto
Together with Matt Madison (maintainer of the meta-tegra layer), we created the recipes required to build and deploy NVIDIA-Docker on Tempemail Linux LTS 19 (Yocto 3.0 Zeus) [4].
In this tutorial, you will learn how to enable NVIDIA containers on a custom Linux distribution and how to run a small test application that leverages the GPU from inside a container.
Description
To enable NVIDIA containers, Docker needs the nvidia-container-runtime, a modified version of runc that adds a custom pre-start hook to every container. The nvidia-container-runtime communicates with Docker through the library libnvidia-container, which automatically configures GNU/Linux containers to leverage NVIDIA hardware. The library relies on kernel primitives and is designed to be agnostic of the container runtime. All of the effort to port these libraries and tools to the Yocto Project was submitted to the community and is now part of the meta-tegra layer, which is maintained by Matt Madison.
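Under the hood, the runtime is registered with Docker as an additional OCI runtime. As a minimal sketch (the meta-tegra recipes generate the real configuration, which may differ in detail), the entry in /etc/docker/daemon.json looks like:

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}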
Note: this setup is based on Linux for Tegra (L4T), not the upstream Yocto Linux kernel.
Benefits and Limitations
The main benefit of using GPUs inside containers is the portability and stability of the environment at deployment time. Development also benefits from this portable environment, as developers can collaborate more efficiently.
However, there are limitations due to the nature of the NVIDIA environment. The containers are heavyweight because they are based on a Linux4Tegra image that contains the libraries required at runtime. On the other hand, because of redistribution limitations, some libraries cannot be included in the container. This requires runc to mount some proprietary libraries from the host, losing portability in the process.
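These mounts are driven by small CSV files that list which host files the pre-start hook must expose to the container. As a hedged illustration (exact library paths differ across JetPack releases), entries in the *-container-csv files installed under /etc/nvidia-container-runtime/host-files-for-container.d/ take the form:

lib, /usr/lib/aarch64-linux-gnu/tegra/libcuda.so.1.1
sym, /usr/lib/aarch64-linux-gnu/tegra/libcuda.so
dir, /usr/src/tensorrt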
Prerequisites
You are required to download proprietary NVIDIA code from their website. To do so, you will need to create an NVIDIA Developer Network account.
Go to https://developer.nvidia.com/embedded/downloads, download the NVIDIA SDK Manager, install it, and download all of the files for the Jetson board you own.
The required JetPack version is 4.3. Launch the SDK Manager to perform the downloads:
$ /opt/nvidia/sdkmanager/sdkmanager

Image 1. SDK Manager installation
If you need to include TensorRT in your builds, you must create a NoDLA subdirectory and copy all of the TensorRT packages downloaded by the SDK Manager into it:
$ mkdir /home/$USER/Downloads/nvidia/sdkm_downloads/NoDLA
$ cp /home/$USER/Downloads/nvidia/sdkm_downloads/libnv* /home/$USER/Downloads/nvidia/sdkm_downloads/NoDLA

Creating the project
$ git clone --branch WRLINUX_10_19_BASE https://github.com/WindRiver-Labs/wrlinux-x.git
$ ./wrlinux-x/setup.sh --all-layers --dl-layers --templates feature/docker

Note: --distro wrlinux-graphics can be used for some applications that require X11.
Add the meta-tegra layer
DISCLAIMER: meta-tegra is a community-maintained layer, not supported by Tempemail at the time of writing.
$ git clone https://github.com/madisongh/meta-tegra.git layers/meta-tegra
$ cd layers/meta-tegra
$ git checkout 11a02d02a7098350638d7bf3a6c1a3946d3432fd
$ cd -

Tested with: https://github.com/madisongh/meta-tegra/commit/11a02d02a7098350638d7bf3a6c1a3946d3432fd
$ . ./environment-setup-x86_64-wrlinuxsdk-linux
$ . ./oe-init-build-env

$ bitbake-layers add-layer ../layers/meta-tegra/
$ bitbake-layers add-layer ../layers/meta-tegra/contrib

Configure the project
$ echo "BB_NO_NETWORK = '0'" >> conf/local.conf
$ echo 'INHERIT_DISTRO_remove = "whitelist"' >> conf/local.conf

Set the machine to match your Jetson board:
$ echo "MACHINE='jetson-nano-qspi-sd'" >> conf/local.conf
$ echo "PREFERRED_PROVIDER_virtual/kernel = 'linux-tegra'" >> conf/local.conf

CUDA cannot be compiled with GCC versions higher than 7, so pin GCC to the 7.x series:
$ echo 'GCCVERSION = "7.%"' >> conf/local.conf
$ echo "require contrib/conf/include/gcc-compat.conf" >> conf/local.conf

Set the image type to tegraflash for ease of deployment:
$ echo 'IMAGE_CLASSES += "image_types_tegra"' >> conf/local.conf
$ echo 'IMAGE_FSTYPES = "tegraflash"' >> conf/local.conf

Change the Docker version by replacing the stock docker package with docker-ce:
$ echo 'IMAGE_INSTALL_remove = "docker"' >> conf/local.conf
$ echo 'IMAGE_INSTALL_append = " docker-ce"' >> conf/local.conf

Fix the tini build error:
$ echo 'SECURITY_CFLAGS_pn-tini_append = " ${SECURITY_NOPIE_CFLAGS}"' >> conf/local.conf

Set the NVIDIA download location:
$ echo "NVIDIA_DEVNET_MIRROR='file:///home/$USER/Downloads/nvidia/sdkm_downloads'" >> conf/local.conf
$ echo 'CUDA_BINARIES_NATIVE = "cuda-binaries-ubuntu1604-native"' >> conf/local.conf

Add the NVIDIA container runtime, the AI libraries, and the AI libraries' container CSV files:
$ echo 'IMAGE_INSTALL_append = " nvidia-docker nvidia-container-runtime cudnn tensorrt libvisionworks libvisionworks-sfm libvisionworks-tracking cuda-container-csv cudnn-container-csv tensorrt-container-csv libvisionworks-container-csv libvisionworks-sfm-container-csv libvisionworks-tracking-container-csv"' >> conf/local.conf

Enable ldconfig, which is required by the nvidia-container-runtime:
$ echo 'DISTRO_FEATURES_append = " ldconfig"' >> conf/local.conf

Build the project
$ bitbake wrlinux-image-glibc-std

Burn the image onto the SD card
$ unzip wrlinux-image-glibc-std-sato-jetson-nano-qspi-sd-20200226004915.tegraflash.zip -d wrlinux-jetson-nano
$ cd wrlinux-jetson-nano

Connect the Jetson board to your computer using the micro-USB cable as shown in the image:
Image 2. Recovery mode setup for Jetson Nano
Image 3. Pins Diagram for Jetson Nano
After connecting the board, run:
$ sudo ./dosdcard.sh

This command will create the file wrlinux-image-glibc-std.sdcard, which contains the SD card image required to boot.
Burn the image to the SD card:
$ sudo dd if=wrlinux-image-glibc-std.sdcard of=/dev/***** bs=8k

Warning: substitute the of= device with the one that points to your SD card. Failure to do so can lead to the unexpected erasure of hard disks.
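If you are unsure which device node belongs to the SD card, one way to check before running dd (a suggested precaution, not part of the original flow) is:

$ lsblk -d -o NAME,SIZE,MODEL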
Deploy the target
Boot up the board and find its IP address with the ifconfig command.
Then, ssh into the machine to build and run the container:
$ ssh root@<ip-address>

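Once logged in, you can optionally confirm that Docker has registered the NVIDIA runtime (a quick sanity check; the exact output varies by Docker version):

# docker info | grep -i runtime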
Create tensorflow_demo.py using the example from the "Train and evaluate with Keras" section of the TensorFlow documentation:
#!/usr/bin/python3
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(784,), name='digits')
x = layers.Dense(64, activation='relu', name='dense_1')(inputs)
x = layers.Dense(64, activation='relu', name='dense_2')(x)
outputs = layers.Dense(10, name='predictions')(x)

model = keras.Model(inputs=inputs, outputs=outputs)

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

# Preprocess the data (these are Numpy arrays)
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255

y_train = y_train.astype('float32')
y_test = y_test.astype('float32')

# Reserve 10,000 samples for validation
x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]

model.compile(optimizer=keras.optimizers.RMSprop(),  # Optimizer
              # Loss function to minimize
              loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              # List of metrics to monitor
              metrics=['sparse_categorical_accuracy'])

print('# Fit model on training data')
history = model.fit(x_train, y_train,
                    batch_size=64,
                    epochs=3,
                    # We pass some validation data for
                    # monitoring validation loss and metrics
                    # at the end of each epoch
                    validation_data=(x_val, y_val))

print('\nhistory dict:', history.history)

# Evaluate the model on the test data using `evaluate`
print('\n# Evaluate on test data')
results = model.evaluate(x_test, y_test, batch_size=128)
print('test loss, test acc:', results)

# Generate predictions (probabilities -- the output of the last layer)
# on new data using `predict`
print('\n# Generate predictions for 3 samples')
predictions = model.predict(x_test[:3])
print('predictions shape:', predictions.shape)

Create a Dockerfile:
FROM tianxiang84/l4t-base:all

WORKDIR /root
COPY tensorflow_demo.py .

ENTRYPOINT ["/usr/bin/python3"]
CMD ["/root/tensorflow_demo.py"]

Build the container:
# docker build -t l4t-tensorflow .

Run the container:
# docker run --runtime nvidia -it l4t-tensorflow

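While the container is running, you can optionally watch the GPU load from a second shell on the target. On L4T-based images, the tegrastats utility (if included in your image) reports GR3D, the GPU utilization:

# tegrastats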
Results
Note the use of GPU 0:
2020-04-22 21:13:56.969319: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2020-04-22 21:13:58.210600: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 268 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)

Conclusions
Using NVIDIA containers allows for the smooth deployment of AI applications. Once your Linux distribution is running containers with the custom NVIDIA runtime, getting a neural network to work is as simple as running one command. Getting an NVIDIA Tegra board to run compute-intensive workloads is now easier than ever.
With the provided custom runc engine enabling CUDA and the related libraries, your applications will run as if they were on bare metal.
One of the possibilities containers offer is combining this setup with Kubernetes or the NVIDIA EGX platform to handle orchestration. The Kubernetes device plugins distribute and manage workloads across multiple acceleration devices, giving you high availability among other benefits. Combine this with technologies such as TensorFlow and OpenCV, and you will have an army of edge devices ready to run your intelligent applications.
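As a hedged sketch of that direction, assuming a cluster where the NVIDIA Kubernetes device plugin is deployed (it advertises the nvidia.com/gpu extended resource), a pod could claim the GPU like this; the image name reuses the l4t-tensorflow container built above:

apiVersion: v1
kind: Pod
metadata:
  name: l4t-tensorflow
spec:
  containers:
  - name: l4t-tensorflow
    image: l4t-tensorflow
    resources:
      limits:
        nvidia.com/gpu: 1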
References

[1] K. Hao, "The computing power needed to train AI is now rising seven times faster than ever before", MIT Technology Review, Nov. 2019. [Online]. Available: https://www.technologyreview.com/s/614700/the-computing-power-needed-to-train-ai-is-now-rising-seven-times-faster-than-ever-before.
[2] NVIDIA, nvidia-docker, [Online; accessed 15 Mar. 2020], Feb. 2020. [Online]. Available: https://github.com/NVIDIA/nvidia-docker.
[3] L. Benedicic and M. Gila, "Accessing GPUs from containers in HPC", 2016. [Online]. Available: http://sc16.supercomputing.org/sc-archive/tech_poster/poster_files/post187s2-file3.pdf.
[4] M. Madison, Container runtime for master, [Online; accessed 30 Mar. 2020], Mar. 2020. [Online]. Available: https://github.com/madisongh/meta-tegra/pull/266.

All product names, logos, and brands are property of their respective owners. All company, product, and service names used in this software are for identification purposes only. Tempemail is a registered trademark of Tempemail Systems.
Disclaimer of Warranty / No Support: Tempemail does not provide support and maintenance services for this software, under Tempemail’s standard Software Support and Maintenance Agreement or otherwise. Unless required by applicable law, Tempemail provides the software (and each contributor provides its contribution) on an “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, either express or implied, including, without limitation, any warranties of TITLE, NONINFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the software and assume any risks associated with your exercise of permissions under the license.
TensorFlow, the TensorFlow logo and any related marks are trademarks of Google Inc.
Docker is a trademark of Docker, Inc.
NVIDIA, NVIDIA EGX, CUDA, Jetson, and Tegra are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.
