
Join Canonical at 2024 GTC AI Conference

Felicia Jia

on 29 February 2024

Tags: AI/ML, edge, Event, gtc, IoT, nvidia, partners

As a key technology partner of NVIDIA, Canonical is proud to showcase our joint solutions at NVIDIA GTC again. Join us in person at NVIDIA GTC, March 18-21, 2024, to explore what’s next in AI and accelerated computing. We will be at booth 1601 in the MLOps & LLMOps Pavilion, demonstrating how open source AI solutions can take your models to production, from edge to cloud.

Register for GTC now!

AI on Ubuntu – from cloud to edge

As the world becomes more connected, there is a growing need to extend data processing beyond the data centre to edge devices in the field. Cloud computing provides abundant resources for AI adoption, processing, storage and analysis, but it cannot support every use case. Deploying models to edge devices expands the reach of AI by letting you process some of the data locally and gain real-time insights without relying exclusively on a centralised data centre or cloud. This is especially relevant when latency, bandwidth or privacy constraints make it impractical or impossible to deploy AI applications in a centralised cloud or enterprise data centre.

A production-grade project therefore needs a solution that enables scalability, reproducibility and portability. Canonical delivers a comprehensive AI stack with the open source software your organisation needs for AI projects from cloud to edge, giving you:

  • The same experience on edge devices and on any cloud, whether private, public or hybrid
  • Low-ops, streamlined lifecycle management
  • A modular and open source suite for reusable deployments

Book a meeting with us

To put our AI stack to the test, at NVIDIA GTC 2024 we will present how our Kubernetes-based AI infrastructure solutions can help create a blueprint for smart cities, leveraging best-in-class NVIDIA hardware capabilities. We will cover training in the cloud and in data centres, and showcase the solution deployed at the edge on Jetson Orin-based devices. Check out the details below and meet our experts on-site.

Canonical’s invited talk at GTC

Accelerate Smart City Edge AI Deployment With Open-Source Cloud-Native Infrastructure [S61494]

Abstract:

Artificial intelligence is no longer confined to data centres; it has expanded to operate at the edge. Some models require low latency, necessitating execution close to end users. This is where edge computing, optimised for AI, becomes essential. Among the most popular use cases for modern smart cities are city-wide assistants deployed as “point-of-contact” devices at bus stops, subway stations and similar locations. They interact with backend infrastructure to respond to changing conditions as users travel around the city. That creates a need to process local data gathered from infrastructure such as internet-of-things gateways, smart cameras, or buses. Thanks to NVIDIA Jetson modules, this data can be processed locally for fast, low-latency AI-driven insights. When device-local computational capabilities are exceeded, processing can be offloaded to edge or backend infrastructure: with the power of the Tegra SoC, data can first be aggregated on the edge devices and later sent to the cloud for further processing. Open-source deployment mechanisms enable such complex setups through automated management, Day 2 operations, and security. Canonical, working alongside NVIDIA, has developed an open-source software infrastructure that simplifies the deployment of multiple Kubernetes clusters at the edge with access to GPUs. We’ll go over those mechanisms, and how they orchestrate the deployment of Kubernetes-based AI/machine learning infrastructure across the smart city blueprint to take advantage of NVIDIA hardware capabilities, both on devices and on cloud instances.

Presenter: Gustavo Sanchez, AI Solutions Architect, Canonical

Date & Time: March 20, 10:00am – 10:25am PDT

Location: Marriott Ballroom 5 (L2)
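To give a sense of the pattern the abstract describes, here is a minimal Python sketch of the edge-to-cloud flow: process data locally on a Jetson-class device, aggregate the results, and periodically offload the aggregates to backend infrastructure. The endpoint URL, flush interval and local_inference() stub are hypothetical placeholders, not part of the actual smart city blueprint.

    import time

    import requests  # third-party HTTP client: pip install requests

    BACKEND_URL = "https://backend.example.com/ingest"  # hypothetical endpoint
    FLUSH_INTERVAL_S = 30  # ship aggregated results every 30 seconds


    def local_inference(frame: bytes) -> dict:
        """Placeholder for an on-device model running on the Jetson GPU."""
        return {"objects_detected": 0, "timestamp": time.time()}


    def run_edge_loop(frames):
        """Process frames locally, aggregate results, and offload periodically."""
        aggregates = []
        last_flush = time.time()
        for frame in frames:
            # Low-latency work stays on the device.
            aggregates.append(local_inference(frame))
            # Heavier, non-time-critical analysis is offloaded to the backend.
            if time.time() - last_flush >= FLUSH_INTERVAL_S and aggregates:
                requests.post(BACKEND_URL, json={"results": aggregates}, timeout=5)
                aggregates.clear()
                last_flush = time.time()

In practice, the split between on-device processing and what gets offloaded depends on the model and the capacity of the Jetson module in use.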

Listen to our recent podcast to learn more about the use cases, challenges and key considerations for this kind of deployment.

Build and scale your AI projects with Canonical and NVIDIA

Starting a deep learning pilot within an enterprise has its own set of challenges, but scaling projects to production-grade deployments brings a host of additional difficulties. These chiefly relate to the increased hardware, software and operational requirements that come with larger and more complex initiatives.

Canonical and NVIDIA offer an integrated end-to-end solution – from hardware-optimised Ubuntu to application orchestration and MLOps. We enable organisations to develop, optimise and scale ML workloads.

Canonical will showcase three demos to walk you through our joint AI/ML solutions with NVIDIA:

  • Accelerate smart city edge AI deployments with open-source cloud-native infrastructure – An architecture that addresses edge AI challenges such as software efficiency, security, monitoring and Day 2 operations. Canonical, working alongside NVIDIA, has developed an open-source software infrastructure that simplifies training on private and public clouds as well as the deployment and operation of AI models on edge clusters with access to NVIDIA GPU capabilities (see the deployment sketch after this list).
  • End-to-end MLOps with hybrid-cloud-capable open source tooling – Cost optimisation, data privacy and HPC performance on GPUs are some of the reasons why companies consider private, hybrid and multi-cloud solutions for their data and AI infrastructure. Open source, cloud-agnostic infrastructure for machine learning operations gives companies the flexibility to move beyond public cloud vendor lock-in, align with strict data compliance constraints and take full advantage of their hardware resources, while automating day-to-day operations.
  • LLM and RAG open-source infrastructure – This demo shows an end-to-end solution, from data collection and cleaning to training and inference, using an open source large language model integrated with an open source vector database through retrieval-augmented generation (RAG). It shows how to scrape information from your publicly available company website, embed it into the vector database, and have it consumed by the LLM (a minimal sketch follows this list).
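To illustrate the first demo, here is a minimal sketch using the Kubernetes Python client to schedule a GPU-backed inference workload on an edge cluster. The container image and namespace are hypothetical, and the nvidia.com/gpu resource assumes the NVIDIA device plugin or GPU Operator is installed on the cluster; this is an illustration of the approach, not the demo’s actual manifests.

    from kubernetes import client, config  # pip install kubernetes

    # Connect to the edge cluster using the local kubeconfig.
    config.load_kube_config()

    # Container that requests one NVIDIA GPU; the image name is a placeholder.
    container = client.V1Container(
        name="smart-city-inference",
        image="registry.example.com/smart-city-inference:latest",
        resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
    )

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="edge-inference"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "edge-inference"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "edge-inference"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)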
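And to illustrate the third demo, here is a minimal RAG sketch, assuming ChromaDB as the open source vector database; the scraped page contents, the example question and the final LLM hand-off are placeholders rather than the demo’s actual implementation.

    import chromadb  # open source vector database: pip install chromadb

    # 1. Store scraped website text in the vector database; ChromaDB embeds
    #    documents with its default embedding function.
    chroma = chromadb.Client()
    collection = chroma.create_collection("company-website")
    collection.add(
        ids=["page-1", "page-2"],
        documents=[
            "Text scraped from the public company website, page 1 ...",
            "Text scraped from the public company website, page 2 ...",
        ],
    )

    # 2. Retrieve the chunks most relevant to a user question.
    question = "What AI solutions does the company offer?"
    hits = collection.query(query_texts=[question], n_results=2)
    context = "\n".join(hits["documents"][0])

    # 3. Augment the prompt with the retrieved context; pass `prompt` to your
    #    open source LLM inference endpoint of choice.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    print(prompt)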

Visit the Canonical booth (1601) at GTC to check them out.

Come and meet us at NVIDIA GTC 2024

If you are interested in building or scaling your AI projects with open source solutions, we are here to help you. Visit ubuntu.com/nvidia to explore our joint data centre offerings.

Book a meeting with us

Learn more about our joint solutions

Explore Canonical & Ubuntu at Past GTCs

