Synthetic: Nvidia's amazing AI chip and DeepMind's AI football coach
Elevating your inbox's IQ
This week in AI
Welcome to Synthetic, the newsletter on all things AI. Please consider sharing this edition with others who might find it useful.
This article from The Verge gives a brief summary of the hot new chip from Nvidia. And you can click on the video link in the section below to watch a 16-minute supercut of Jensen Huang’s two-hour keynote at Nvidia’s GTC event this week.
Here is Synthetic’s perspective, with our key insights from the event:
Blackwell, their new multi-die GPU, packs 208 billion transistors, delivers 2.5X-6X the performance, and offers 5X the token-generation capability of Hopper, the previous-generation architecture behind the H100 chip that until now was the darling of the generative AI world.
Blackwell is the name of a chip, a platform, and an architecture.
Their new DGX computer, which packs many Blackwell chips into a rack, can deliver 720 PFLOPS of training performance and 1.4 ExaFLOPS of inference performance. That’s a LOT; it’s the first ExaFLOPS machine in a single rack. Nvidia used a raft of system-level advancements to boost Blackwell platform performance 22X-45X over Hopper. That’s a huge leap, and this kind of performance will be needed to train next-generation models with 15-20 trillion parameters (by comparison, GPT-4 is believed to have about 1.8 trillion parameters).
A Blackwell machine delivering performance similar to Hopper’s would consume ¼ the power.
Nvidia announced plans to build a data center for Amazon Web Services with 32,000 Blackwell GPUs, delivering 654 ExaFLOPS of performance. Yowsa.
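If you like to sanity-check headline numbers, here is a quick back-of-envelope calculation using only the figures quoted above (a sketch; the precision mode behind each FLOPS figure is Nvidia’s choice, likely low-precision inference math, and the rack comparison assumes the quoted figures are directly comparable):

```python
# Back-of-envelope arithmetic on the performance figures quoted above.
# All inputs come from Nvidia's announcements as reported in this newsletter.

AWS_TOTAL_EXAFLOPS = 654        # planned AWS data center total
AWS_GPU_COUNT = 32_000          # Blackwell GPUs in that data center
RACK_INFERENCE_EXAFLOPS = 1.4   # one DGX rack, inference performance

# Implied per-GPU throughput, in petaFLOPS (1 ExaFLOPS = 1,000 PFLOPS)
per_gpu_pflops = AWS_TOTAL_EXAFLOPS * 1000 / AWS_GPU_COUNT
print(f"~{per_gpu_pflops:.1f} PFLOPS per GPU")      # ~20.4 PFLOPS

# How many DGX racks would it take to match the AWS build-out?
racks_equivalent = AWS_TOTAL_EXAFLOPS / RACK_INFERENCE_EXAFLOPS
print(f"~{racks_equivalent:.0f} rack-equivalents")  # ~467 racks
```

So the planned data center works out to roughly 20 PFLOPS per GPU, or the equivalent of about 467 of the new DGX racks.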
The future of AI data centers is liquid cooling. Check out two-phase cooling technology from WiWynn if you want to learn more.
Industry support for the platform was incredible. Nvidia is partnering with anyone who’s anyone.
Nvidia announced a new way to package optimized models that makes them easier to use, called NIMs (Nvidia Inference Microservice). The catch: they only run on Nvidia chips.
Nvidia is going BIG on robots, specifically humanoid robots. They plan to build out an entire software/hardware stack, including a foundation model, named Project GR00T, that learns by watching human demonstrations and videos. They also talked a lot about Omniverse and Isaac, capabilities that can be used to develop, simulate, and deploy robots. Check out Nvidia’s cool robot video (3m 42s). Prepare to bow down before your robot overlords.
For more insight on Nvidia’s big launch event, check out Stratechery’s further analysis.
It seems pretty clear at this stage that iOS 18 will be chock-full of AI features, but how will Apple deliver them at the scale required? While Apple has made more AI acquisitions than anyone else in the last 12 months, and they are building their own models and AI capabilities, they are apparently talking with Google about using Gemini, and have also considered bringing capabilities to the iPhone based on ChatGPT. Time will tell, and we will likely all find out what Apple has cooking at WWDC in June.
Supercut: Nvidia’s GTC keynote in 16 minutes
If you didn’t have the two hours needed to tune into Nvidia’s GTC conference keynote this week, well, this video is for you. Jensen Huang’s keynote is relatively technical, but if you are a technologist, or tech-curious, catch all the highlights here.
AI Innovation
In another piece of cool research from the labs of Google DeepMind, researchers show how they worked closely with football giants Liverpool F.C. for several years to develop an AI assistant for coaches. The assistant is particularly good at helping to optimize corner kicks. This work could scale to other sports and will likely have application beyond sports as researchers strive to build AIs that can reason and plan.
Fun insight: Liverpool F.C. just happens to be the favorite team of Google DeepMind CEO Demis Hassabis.
This week, OpenAI CEO Sam Altman returned to the Lex Fridman podcast for a wide-ranging, two-hour conversation. Altman denied that a secret research breakthrough known as Q*, thought to bring basic mathematical reasoning capabilities to AI, had led to the clandestine development of AGI, or that it was what led to his temporary ouster as OpenAI’s leader. He declined to share any more information on Q* but went on to say that he expects GPT-5 to be as big a leap over GPT-4 as GPT-4 was over GPT-3, and that something exciting is coming from OpenAI later this year. The article includes a link to the full interview for your viewing/listening pleasure.
AI Insights
Sora, OpenAI’s video generation model, was unveiled about a month ago and caused quite a stir. This article argues that Sora is more than a video generator and should be thought of as a primitive world model, with an understanding of style, characters, objects, scenery, movement, and how the physical world works. That was OpenAI’s intent in building Sora as a research project: an attempt to build a model that understands the world we live in. The road to AGI will be paved by impressive work like this, filling in the gaps in AI’s understanding.
In a surprise announcement, Microsoft revealed it has hired Mustafa Suleyman, co-founder of DeepMind and founder/CEO of Inflection AI, as the CEO of their new consumer AI division, Microsoft AI. This means that Suleyman, who does not have a technical background, will oversee development of all Microsoft’s consumer-facing AI products, including Bing, Edge, and their range of AI assistants branded as Copilot. Sean White will be taking over Suleyman’s CEO role at Inflection AI.
Google has long shown interest in bringing its powerful technology to healthcare. Executives speaking at their latest health-oriented event, The Check Up (full video of the event is here), outlined plans to fine-tune their Gemini model for medical applications and to build personal large language models to power health and wellness features in their Fitbit platform. Earlier this year, Google showed its AMIE model, a research AI built to diagnose ailments by chatting with patients. In blind tests, testers found AMIE delivered higher-quality diagnoses and more empathetic responses than human primary care practitioners.
Another cracking article from Harvard Business Review, this time about the latest thinking on the use of machine learning in supply chains. The case study discusses the flaws with existing supply chain optimization techniques and proposes a new methodology, Optimal Machine Learning (OML), that the authors claim overcomes the shortcomings of existing approaches. The article explores the new approach and includes two case studies on OML use in a semiconductor company and a consumer electronics company. TLDR: they made better use of data and forecasts to make decisions. If supply chains are your jam, this article is for you.
Toolkit for the Future
AI-Powered Creativity
Instant, polished presentations powered by AI. Impress your audience effortlessly with Gamma. Engage users on any device. Measure engagement, get quick reactions, and collaborate seamlessly.
The Laxis AI Meeting Assistant allows your sales team to stay focused on their customers during meetings, capturing each attendee’s comments verbatim and flagging items for follow up. It saves your revenue team time as it auto-generates meeting summaries and follow-up emails in seconds, quickly identifying customer requirements, pain points, and action items. And it’ll update your CRM in one click. Works with Google Meet, Zoom, Webex, Teams, or with a simple voice recording. Rated 4.9/5 on G2, and it’s free to try.
This tool is pretty impressive. In minutes, you can train a Browse AI robot to extract specific data from websites and drop it into a spreadsheet that fills itself. The robot will notify you of changes in that data, and there are many pre-built robots that make it a breeze to get started. So whether you need to extract job listings from LinkedIn, Monster, and Upwork, property details from Zillow, company information from Clutch, videos and comments from TikTok, or hotel prices and reviews from Booking.com, or to gather competitor pricing, Browse AI has you covered. Easy to use; free to try; no coding required!
I travel quite a bit, so I’ve always got armfuls of receipts in my bag that need to be scanned, categorized, and cataloged. Shoeboxed makes it easy. Capture receipts on the go using their app, forward email receipts, or grab them using the web portal. They also have an option to mail in a pile of physical receipts. Their AI saves you time by extracting all the information you need from the receipt: vendor, amount, sales tax, location, and so on, and organizing them in an easy-to-search database complete with IRS-approved, secure image scans. Shoeboxed makes it easy to create expense reports and output expense data to spreadsheets. Perfect for freelancers/individuals or larger businesses with many users. Chosen as Hubspot’s #1 choice for tax season prep and the best receipt tracking app for emailed receipts by Forbes, Shoeboxed also has 4.4/5 stars on Capterra and a score of 4.5/5 on Techradar Pro. Try it FREE for 30 days with this link.
I talk a lot about AI assistants, both in my keynotes and also in my book, The Innovation Ultimatum. This versatile AI assistant from Maika AI is built for content creators, which these days means most of us. It’s free to try and helps you to write text, edit content, change writing tone, create images, generate audio, easily translate text into other languages, quickly summarize hours-long YouTube videos, generate memes, and more. Worth checking out for anyone who creates content of any kind.
Creating audio content can be a daunting task. Recording, editing, and refining audio often demands more time than entrepreneurs can spare. Imagine being able to produce human-like audio effortlessly with just a single click. ElevenLabs makes that a reality by allowing you to generate unbelievably high-quality audio efficiently and cost-effectively in 29 languages. A game changer for content creators aiming to increase efficiency and reach a global audience through multilingual content. To see ElevenLabs in action, check out this super fun clip (not by ElevenLabs) that an AI researcher put together, in which he has OpenAI’s GPT-4 describe a scene and then uses ElevenLabs to voice the narration in the unmistakable tones of Sir David Attenborough. Just wild. Anyway, you can try ElevenLabs out for free here.