Nvidia CUDA in 100 Seconds
What is CUDA? And how does parallel computing on the GPU enable developers to unlock the full potential of AI? Learn the basics of Nvidia CUDA programming in this quick tutorial.
Sponsor Disclaimer: I was not paid to make this video, but Nvidia did hook me up with an RTX 4090
#programming #gpu #100secondsofcode
💬 Chat with Me on Discord
https://discord.gg/fireship
🔗 Resources
CUDA https://nvda.ws/3SF2OCU
GTC https://nvda.ws/3uDuKzj
CPU vs GPU https://youtu.be/r5NQecwZs1A
🔖 Topics Covered
– How does CUDA work?
– CUDA basics tutorial in C++
– Who invented CUDA?
– Difference between CPU and GPU
– CUDA quickstart
– How deep neural networks compute in parallel
– AI programming concepts
– How does a GPU work?
by Fireship
Shoutout to Nvidia for hooking me up with an RTX 4090 to run the code in this video. Get the CUDA toolkit here: https://nvda.ws/3SF2OCU
1:41 computers are amazing. What’s a compiler?
this is C not C++
Has anyone else questioned how these are 100 seconds? I’m not complaining but the premise is a lie!!! Keep the lies coming Fireship!
Nice introductory video on GPUs and CUDA programming. Check out https://www.youtube.com/playlist?list=PL1ysOEBe5977vlocXuRt6KBCYu_sdu1Ru for a complete course on CUDA programming. GPUs are transforming our world today, with applications ranging from artificial intelligence to automobiles, and from simulations to weather prediction and more.
If I have $1000, how much can I earn with this starting amount?
CUDA is some cracking name 😂😂. Will the next generation of CUDA become CUDAI?
Computing
Unified
Device
Architecture
Integrated
Now you can invest 100 days in installing it on Ubuntu.
❤
In short, parallel computing = faster
Gemini 1.5 Pro: This video is about Nvidia CUDA, a parallel computing platform that allows programmers to use the GPU for more than just playing video games.
The video starts by explaining what GPUs are typically used for. GPUs are used for computing graphics. When you play a game in 1080p at 60 FPS, there are over 2 million pixels on the screen that may need to be recalculated after every frame. This requires a lot of matrix multiplication and vector transformations, which GPUs are very good at. Unlike a CPU that has a few cores, a modern GPU like the RTX 4090 has over 16,000 cores. This makes GPUs perfect for tasks that can be parallelized.
CUDA allows developers to tap into the power of the GPU. Data scientists all around the world are using CUDA to train machine learning models. To use CUDA, you first write a function called a CUDA kernel that runs on the GPU. Then you copy some data from your main RAM over to the GPU's memory. The CPU then tells the GPU to execute that function, or kernel, in parallel. The code runs in blocks of threads, and those blocks are organized into a multi-dimensional grid. Finally, the result is copied from the GPU back to main memory.
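Not code from the video, but a minimal sketch of that workflow using explicit copies (cudaMalloc / cudaMemcpy); the kernel name `scale`, the array size, and the launch numbers are made-up choices for illustration. Compile with nvcc.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel for illustration: double every element of an array.
__global__ void scale(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's global index
    if (i < n) data[i] *= 2.0f;                     // guard: the grid may have extra threads
}

int main() {
    const int N = 1 << 20;                      // 1M elements, an arbitrary size
    const size_t bytes = N * sizeof(float);

    float* host = (float*)malloc(bytes);        // buffer in main RAM
    for (int i = 0; i < N; ++i) host[i] = 1.0f;

    float* device = nullptr;                    // buffer in GPU memory
    cudaMalloc(&device, bytes);
    cudaMemcpy(device, host, bytes, cudaMemcpyHostToDevice);  // RAM -> GPU

    // The CPU tells the GPU to run the kernel in parallel: 4096 blocks of 256 threads.
    scale<<<(N + 255) / 256, 256>>>(device, N);

    cudaMemcpy(host, device, bytes, cudaMemcpyDeviceToHost);  // result back to RAM
    printf("host[0] = %.1f\n", host[0]);        // prints 2.0 if it worked

    cudaFree(device);
    free(host);
    return 0;
}
```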
The video then goes through an example of building a CUDA application. The first step is to install the CUDA toolkit, which includes the device drivers, a runtime, a compiler, and developer tools. The code is written in C++. The video shows an example of a kernel that adds two vectors together. Because billions of operations might be happening in parallel, the kernel calculates the global index of its thread within the grid so it knows which element it is working on. The video also covers managed memory, which can be accessed from both the CPU and the GPU.
The video then shows a main function, running on the CPU, that launches the CUDA kernel. The code uses a for loop to initialize the arrays with data, then hands that data to the kernel to be processed on the GPU. It also shows how to configure the kernel launch to control how many blocks and how many threads per block are used to run the code in parallel, which matters when optimizing multi-dimensional data structures like the tensors used in deep learning. Finally, the code calls cudaDeviceSynchronize to wait for the GPU to finish executing, after which the result is copied back to the host machine.
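This is not the video's exact source, just a sketch of the vector-add program that summary describes: managed memory, a global thread index, a configurable launch, and cudaDeviceSynchronize. The names, element count, and launch numbers here are illustrative assumptions.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each GPU thread adds one pair of elements.
__global__ void vectorAdd(const int* a, const int* b, int* c, int n) {
    // Global index of this thread across all blocks in the grid.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int N = 1 << 16;          // element count; arbitrary for this sketch
    int *a, *b, *c;

    // Managed memory is accessible from both the CPU and the GPU.
    cudaMallocManaged(&a, N * sizeof(int));
    cudaMallocManaged(&b, N * sizeof(int));
    cudaMallocManaged(&c, N * sizeof(int));

    for (int i = 0; i < N; ++i) {   // initialize the input arrays on the CPU
        a[i] = i;
        b[i] = 2 * i;
    }

    // Launch configuration: how many blocks, and how many threads per block.
    int threadsPerBlock = 256;
    int blocks = (N + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocks, threadsPerBlock>>>(a, b, c, N);

    cudaDeviceSynchronize();        // wait for the GPU to finish before reading c

    printf("c[10] = %d\n", c[10]);  // expect 30
    cudaFree(a);
    cudaFree(b);
    cudaFree(c);
    return 0;
}
```

For multi-dimensional data like tensors, the two launch arguments can be dim3 values instead of plain ints, giving the grid and each block up to three dimensions.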
Great video
The more you buy, the more you save.
Eventually you save more than you spend, and in time you return to the past.
Dude! Less meth before your next video.
I can't believe humans went from hunting dinosaurs and living in caves to some people dealing with massively complicated math to bring technology to us.
"cu" in brazilian portuguese is that hole… inside the ass kkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
Idk if this is a hot take or not. I know Nvidia GPUs are super expensive… but if you look into just how much effort it takes to design and build one, from both the software and the hardware side, it doesn't seem that wild to me really.
woooowww very helpful
🐐💚🖤
You are writing in C
Instructions not clear, GPU is Intel.
What's the game at 0:28?
Can I do deep learning with a GTX 1650?
Homosexuals deserve equal rights!
rip cuda