Local AI Just Got Easy (and Cheap)
This Google TPU makes local AI simple…
Full Blog Tutorial: https://medium.com/@timothydmoody/coral-edge-tpu-home-lab-frigate-budget-surveillance-system-7e26acb834e4
Explore Google's groundbreaking Coral AI mini PCIe accelerator, a game-changer for home automation and DIY projects. In my latest video I integrate this innovative chip with the ZimaBoard and ZimaBlade for superior performance and cost-effectiveness, and show how the setup outperforms alternatives like the Raspberry Pi 5 in both speed and thermal efficiency. Follow my journey from troubleshooting software issues to successfully running Frigate, an advanced home lab computer vision system, and learn how this affordable, under-$100 setup can revolutionize your home tech projects!
Monitor your security cameras with locally processed AI
Frigate is an open source NVR built around real-time AI object detection. All processing is performed locally on your own hardware, and your camera feeds never leave your home.
https://coral.ai/products/m2-accelerator-ae
https://frigate.video/
https://mqtt.org/
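For reference, wiring the Coral into Frigate is mostly configuration. Here is a minimal `config.yml` sketch for the PCIe Coral with an illustrative MQTT broker and camera stream (the host, credentials, and resolution are placeholders, not values from the video):

```yaml
# Minimal Frigate config sketch; all addresses and credentials are placeholders.
mqtt:
  host: 192.168.1.2          # your MQTT broker

detectors:
  coral:
    type: edgetpu
    device: pci              # use "usb" for the USB Accelerator instead

cameras:
  front_door:
    ffmpeg:
      inputs:
        - path: rtsp://user:pass@192.168.1.10:554/stream
          roles:
            - detect
    detect:
      width: 1280
      height: 720
```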
Absolutely mindless tutorial to do… what? Lmao, like thanks for showing us… this
"just"? its been available for years 😅
The Coral TPU M.2 / mini-PCIe version itself is PCIe 2.0 x1, i.e. 500 MB/s, so theoretically it's "slower" than the USB3 version in every case. Obviously the answer isn't as simple as that, but saying the PCIe version is faster because USB3 bandwidth is only 625 MB/s isn't the right answer either. You have to take into account various overheads, driver optimizations, etc…
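Back-of-the-envelope math behind that point: both links signal at 5 Gbit/s with 8b/10b line coding, so the oft-quoted 625 MB/s for USB3 is the raw rate before coding. A quick sketch of the theoretical link budgets (ignoring protocol and driver overhead):

```python
# Theoretical payload bandwidth of each link, before protocol overhead.
def payload_mb_per_s(raw_gbit_s: float, coding_efficiency: float = 8 / 10) -> float:
    """Raw line rate in Gbit/s -> usable payload in MB/s after 8b/10b coding."""
    return raw_gbit_s * coding_efficiency * 1000 / 8

print(f"PCIe 2.0 x1 (5 GT/s): {payload_mb_per_s(5.0):.0f} MB/s")  # ~500 MB/s
print(f"USB 3.0     (5 Gb/s): {payload_mb_per_s(5.0):.0f} MB/s")  # ~500 MB/s; 625 MB/s is the pre-coding figure
```

Either way, the Edge TPU's small models come nowhere near saturating either link, which is presumably why real-world differences come down to those overheads rather than raw bandwidth.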
Would this work on the Orange Pi 5 Plus?
That the Coral TPU only works with older Python versions is a big disadvantage… I struggle with it too, since I'm very accustomed to using the new functionality of Python 3.10-3.12.
Got it working as an HA add-on; it smokes my Amcrest doorbell sensor. "Chihuahua Pitbull mix detected, 109% certainty". Seriously though, I'm wondering when GPU support will work.
Sorry, why can't a GPU in your PC do it?
So how do you use this for AI? Because all I see right now is a very hard-to-set-up Ring camera.
Why did you use 1 unit when you can use at least 5 PCIe units?
Coral is basically dead. It hasn't received updates for several years, and it's stuck on Python 3.9 and an older TFLite.
Sir, can you please tell me how I can inspect Netflix API traffic?
"easy"
Recording only when there's a person, with security cameras? Metal Gear Solid taught me how to deal with that, hehe (cardboard box)
Would you consider testing the youyeetoo X1? It's an SBC based on the 11th-gen Intel N5105 CPU, great for Coral AI. 😀
God, man… the only thing I can think of is an AI exoskeleton with electrode sensors running up the muscles to help the product adapt to the wearer's movement 😅
The Raspberry Pi 5 is shit garbage
Love the video! The audio is a little wonky at points. Loud and then quiet then REALLY loud then quiet. Hurt my ears with my earbuds in. Thanks for the info!
What is Warp? He used it for remote tasks.
You do realize that the Raspberry Pi 5's PCIe is only PCIe 2.0 x1, right? That has a maximum theoretical throughput of 500 MB/s, which is less than the USB version of the Coral TPU… So when using a Pi 5, the USB Coral would absolutely be the better choice.
It's crap, not AI… it can do nothing… any cheaper FPGA can do more than this crap.
Frigate has the go2rtc RTSP server built in. Much better than rtsp-simple-server.
1:42 meowwwww
So much room for activities
I have heard of at least one graphics card manufacturer installing an NVMe drive on the back of a GPU, and now that I know an NVMe AI accelerator exists, I wonder whether either of the graphics card companies would include code to directly leverage an installed AI accelerator for their accelerated GPU features.
Asus is selling a PCIe x16 card with 8 modules for 1380 bucks. Considering that the included modules have a total cost of 200, that is a wildly mad rip-off scheme.
Correction: the chips you put on a card actually cost 20 each instead of 25, which makes the total cost of the chips only 160 bucks. An even greater rip-off by Asus.
Just a reminder: blur is not destructive. Use black/white or colour blocks if you want to censor.
Conda (Anaconda or Miniconda) resolves Python dependency issues by creating a separate environment with the specified version of Python, and it isolates that environment along with anything loaded into it (i.e., if you install with pip or conda, it stays inside the environment and leaves the system Python alone).
This lets me use alpha versions of Ubuntu with any Python programs I want by keeping a separate environment for each program.
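A minimal sketch of that workflow applied to the Coral's Python pin, assuming Miniconda and the install command from Coral's getting-started docs:

```bash
# Isolated env pinned to Python 3.9, so pycoral's version constraint
# never touches the system Python.
conda create -n coral python=3.9
conda activate coral

# pycoral is served from Google's package index rather than PyPI (per coral.ai docs).
python3 -m pip install --extra-index-url https://google-coral.github.io/py-repo/ pycoral~=2.0
```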
Hi there, nice video! I really thought these things were far more expensive than they actually are.
However, what's your take on using them for larger models? It's my understanding that they do NOT have their own memory, so they'd be sequentially exchanging data with the main computer's memory every operation (or every few), and with only that one PCIe lane available they're BOUND to be slow and high-latency, aren't they?
Say I'd like to run even just a 7B Llama, or something in the low hundreds of MB like the MarianMT translation models; that would be effectively useless, at least for real-time processing, wouldn't it?
Also, and this could be a real bonus for these little things, can several of them be used simultaneously to run a single model?
I mean, could I eventually plug 3 or 4 of them into a single machine and use them for inference of a given model at 12 or 16 TOPS?
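For scale: the Edge TPU targets small, int8-quantized TFLite models, with roughly 8 MB of on-chip SRAM for caching weights, so anything bigger streams parameters over the PCIe link and a 7B LLM is far outside its design envelope. Its intended workload looks more like this pycoral classification sketch (filenames are illustrative placeholders; the model must be compiled for the Edge TPU):

```python
# Minimal pycoral classification sketch; filenames below are placeholders.
from PIL import Image
from pycoral.adapters import classify, common
from pycoral.utils.edgetpu import make_interpreter

# Load an int8-quantized model compiled for the Edge TPU.
interpreter = make_interpreter("mobilenet_v2_1.0_224_quant_edgetpu.tflite")
interpreter.allocate_tensors()

# Resize the input frame to the model's expected input size and run inference.
image = Image.open("frame.jpg").resize(common.input_size(interpreter))
common.set_input(interpreter, image)
interpreter.invoke()

for c in classify.get_classes(interpreter, top_k=3):
    print(f"class {c.id}: score {c.score:.2f}")
```

On the multi-TPU question: as I understand the Coral docs, the Edge TPU compiler can segment one model across several TPUs for pipelining (its `num_segments` option), but each segment still has to respect the same size and quantization constraints, so stacking accelerators raises throughput rather than enabling bigger models.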
The content is interesting, but I couldn't get past the sound issues. Way too echoey.
One step closer to Skynet