the FrankeNAS – (Raspberry Pi, Zima Board, Dell Server, Ugreen) // a CEPH Tutorial

Learn the skills to make IT your job: 30% off your first month/first year of any ITPro personal plan with Code “CHUCK30” – https://ntck.co/itprotv

NetworkChuck is building a FrankeNAS using CEPH, an open-source software-defined storage system. Learn how to turn a pile of old hardware into a powerful, scalable storage cluster that rivals enterprise solutions. This video covers Ceph architecture, installation, and configuration, demonstrating how to create a flexible, fault-tolerant storage system using a mix of devices. Whether you’re a home lab enthusiast or IT professional, discover the power of decentralized storage and how Ceph can revolutionize your data management approach.

Commands/Walkthrough: https://ntck.co/403docs
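For reference, here is a minimal sketch of the cephadm flow a build like this typically follows; the hostnames, IPs, and the all-available-devices shortcut are illustrative assumptions, not the video's exact commands (those are in the linked walkthrough):

```bash
# On the first node: install cephadm and bootstrap a brand-new cluster
sudo apt install -y cephadm
sudo cephadm bootstrap --mon-ip 192.168.1.10      # placeholder IP of this node

# Copy the cluster's SSH key to each additional machine, then add it
ssh-copy-id -f -i /etc/ceph/ceph.pub root@node2   # node2/node3 are placeholders
sudo ceph orch host add node2 192.168.1.11
sudo ceph orch host add node3 192.168.1.12

# Turn every empty, unpartitioned disk in the cluster into an OSD
sudo ceph orch apply osd --all-available-devices

# Check overall health and where the daemons landed
sudo ceph -s
sudo ceph orch ps
```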

🔥🔥Join the NetworkChuck Academy!: https://ntck.co/NCAcademy

**Sponsored by ACI Learning, the provider of ITPro**

**00:00** – Intro: the FrankeNAS
**00:45** – Why you NEED CEPH
**04:45** – How CEPH works
**19:55** – TUTORIAL – CEPH Cluster Setup

SUPPORT NETWORKCHUCK
—————————————————
➡️NetworkChuck membership: https://ntck.co/Premium
☕☕ COFFEE and MERCH: https://ntck.co/coffee

Check out my new channel: https://ntck.co/ncclips

🆘🆘NEED HELP?? Join the Discord Server: https://discord.gg/networkchuck

STUDY WITH ME on Twitch: https://bit.ly/nc_twitch

READY TO LEARN??
—————————————————
-Learn Python: https://bit.ly/3rzZjzz
-Get your CCNA: https://bit.ly/nc-ccna

FOLLOW ME EVERYWHERE
—————————————————
Instagram: https://www.instagram.com/networkchuck/
Twitter: https://twitter.com/networkchuck
Facebook: https://www.facebook.com/NetworkChuck/
Join the Discord server: http://bit.ly/nc-discord

AFFILIATES & REFERRALS
—————————————————
(GEAR I USE…STUFF I RECOMMEND)
My network gear: https://geni.us/L6wyIUj
Amazon Affiliate Store: https://www.amazon.com/shop/networkchuck
Buy a Raspberry Pi: https://geni.us/aBeqAL
Do you want to know how I draw on the screen?? Go to https://ntck.co/EpicPen and use code NetworkChuck to get 20% off!!
Fast and reliable UniFi in the cloud: https://hostifi.com/?via=chuck

– NetworkChuck NAS build tutorial
– How to build a NAS with Ceph
– DIY NAS using old hardware
– Ceph software-defined storage guide
– Transforming old laptops into a NAS
– Building a storage cluster with Ceph
– Ceph cluster setup tutorial
– Using Raspberry Pi for NAS
– Creating SMB shares on Ceph
– Mounting Ceph on Linux
– Decentralized storage solutions
– Open-source NAS setup
– Ceph storage for tech enthusiasts
– DIY tech projects with NetworkChuck
– Advanced NAS configuration
– CephFS file system tutorial
– Managing storage with Ceph
– Ceph and Proxmox integration
– High availability storage with Ceph
– Setting up a storage pool in Ceph
– Ceph OSD configuration
– Software-defined storage explained
– Building a scalable NAS
– Tech tips for home servers

#ceph #45drives #nas

by NetworkChuck

47 thoughts on “the FrankeNAS – (Raspberry Pi, Zima Board, Dell Server, Ugreen) // a CEPH Tutorial”

  • Great stuff, please do more on Ceph. I can't believe I watched more than 40 minutes in one shot.

  • It's just CEPH; you either feel it in your squanch or you don't.

  • Thank you for this video, it's great!
    I would appreciate a follow-up video now: what happens when your first manager goes down? How do you handle that? Do you need to install cephadm on the next manager, or is it already installed?
    Some real-life scenarios would be nice, like what happens when some drives go down?

    Thanks!! You're great!
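On the manager question above: cephadm normally keeps a standby mgr on another host, so the following is only a hedged sketch of how you might check and force that failover (daemon names and counts will differ on a real cluster):

```bash
# See which mgr is active and which hosts run standbys
sudo ceph -s
sudo ceph orch ps --daemon-type mgr

# Ask the orchestrator to keep mgr daemons on two hosts
sudo ceph orch apply mgr 2

# Force the active mgr to step down so a standby takes over
# (older releases may require naming the daemon explicitly)
sudo ceph mgr fail
```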

  • Lovely to see more videos about Ceph. I found it when we ran out of space in our production environment. I had dabbled with it a bit at home, but I had to put it into production, like, yesterday, so that was a harrowing week at work. Now, two years later, we've learned a lot and I've made a bunch of videos on the topic, and it's still rock solid.

  • Please make a Proxmox version of this with Ceph!!

  • This was awesome and has me thinking about reconfiguring my current setup. Can you go over in a future video whether it's possible to set up Docker containers for Plex, etc., and how you would swap a defective hard drive?

  • Part 2 suggestion… integration with Proxmox

  • Not badmouthing the technology, but thankfully I don’t need this. Seems way more involved than I want to deal with.

  • Excellent video. Super impressed, but I am still not clear on:
    a. Snapshots and snapshot schedules [snapshots are a must, or else one ransomware attack will take everything]
    b. The OSD recovery process if an OSD dies
    c. Can any system that is part of the Ceph storage cluster be removed if needed?
    d. Is it okay to run a Ceph cluster on a 1 Gb network?
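A few hedged one-liners toward the questions above; filesystem, path, host, and OSD ID values are placeholders, and a 1 Gb network generally works for a lab but replication and recovery traffic will be the bottleneck:

```bash
# a. CephFS snapshots: make sure they are enabled, then snapshot any
#    directory by creating an entry under its hidden .snap directory
sudo ceph fs set cephfs allow_new_snaps true     # "cephfs" is a placeholder fs name
mkdir /mnt/cephfs/data/.snap/before-upgrade

# b./c. Drain and remove a dead OSD or a whole host, then let Ceph rebalance
sudo ceph orch osd rm 3                          # OSD id 3 is a placeholder
sudo ceph orch host drain node3
sudo ceph orch host rm node3
```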

  • I've actually wanted to learn CEPH!!!

  • I use Ceph at work, and I honestly didn't know you could map Ceph directly in Linux remotely. I've always used SMB/NFS.

    So, to pay it forward, one or two things I've learned over the years:

    1. Power Loss Protection, or PLP, is a game changer on SSDs. TL;DR: it lets the SSD "say" a write has finished before the data is actually on flash, because its capacitors guarantee the cached data still lands there if power is lost. This has a massive effect on speed.

    2. If you set it up properly, you can have a "third" speed tier. So not just SSD/HDD, but the ability to WRITE to SSD and STORE the data on HDD, much like a read/write cache in normal ZFS.

    Thank you @networkchuck
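For anyone curious about the "map Ceph directly in Linux" part of the comment above, a minimal sketch, assuming a client that can reach the monitors and has a keyring copied over (addresses, pool, and image names are placeholders):

```bash
# Mount CephFS with the kernel client instead of going through SMB/NFS
sudo apt install -y ceph-common
sudo mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs \
     -o name=admin,secretfile=/etc/ceph/admin.secret

# Or map an RBD image and treat it like a local block device
sudo rbd map rbd-pool/my-image
sudo mkfs.ext4 /dev/rbd0
sudo mount /dev/rbd0 /mnt/rbd

# One common way to build an SSD vs. HDD tier: a CRUSH rule per device
# class, assigned to whichever pool needs the speed
sudo ceph osd crush rule create-replicated fast-ssd default host ssd
sudo ceph osd pool set rbd-pool crush_rule fast-ssd
```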

  • Part 2, part 3, more, more, more! This is just what I need for my setup at home. I enjoy the content.

  • I'd love to see a follow-up video on using all the same hardware for services that utilize the storage: a web server on one, media on another, etc. Or are all these hosts now locked into being storage only?

  • This is awesome!!! But where do I offload all my data so I can reformat all my disks? O o

  • Ceph can deploy distributed SMB containers for you on your cluster, making any node a distributed SMB server with access to the Ceph filesystem. That also means you can use all capable nodes to host an SMB container (which carries the access creds and filesystems to export over SMB), and then use DNS round-robin load balancing if you have a team of folks who need to work from the Ceph filesystem you've created over Samba (depending on your scaling needs).

    Ceph can be used on anything from three Raspberry Pis with external hard drives to multiple sites with full datacenters of hard drives and SSDs, with different CRUSH map policies to keep data fast, locally accessible, and far-flung distributed.

    HOWEVER, this is NOT without cost. Ceph is REALLY, REALLY, RIDICULOUSLY SLOW for writes in particular, because it has to write as many copies as you scope before each block commits. How much slower depends on how fast your storage is, how many parts the file is broken into, and how fast your backend network is at delivering that data to its clients. Caching and SSD tiering can help, but even that's pretty darned slow in my testing.

    I have a 5-node Proxmox cluster (i7-8700, 64 GB RAM each, dual 10G network cards LAGG'd to a fast 10G switch, a boot SSD, and an OSD SSD (2 TB Crucial P3 Plus)). I wanted to put all my VM storage on Ceph (Proxmox supports RADOS block devices (RBD) directly, which is pretty cool). I did A/B testing on the exact same drives and equipment (before and after Ceph) and found that, under my testing parameters, Ceph is 1/10th the performance of just using the storage natively. Here are screenshots of my tests: https://imgur.com/a/Srul5qT
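The write penalty described above comes from the pool's replication factor. A hedged sketch of where that knob lives and a quick way to benchmark it (pool name, PG count, and sizes are illustrative):

```bash
# Every client write is acknowledged only after "size" copies are safe on
# OSDs, so size 3 means roughly 3x the backend write traffic per write
sudo ceph osd pool create vm-pool 128
sudo ceph osd pool set vm-pool size 3
sudo ceph osd pool set vm-pool min_size 2   # stay writable with one copy missing

# Rough synthetic write benchmark against the pool (10 seconds)
sudo rados bench -p vm-pool 10 write
```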

  • Question: if you get ransomware on one drive, would it be replicated to the other drives?

  • Assistant to the regional manager, eh? Not assistant regional manager?

  • Teaches practical skills better than most college professors.

  • I have Ceph clustered across 4 storage nodes, and it's huge… Now, how do I back it up? 😂

  • I will not replace TrueNAS with CEPH
    I will not replace TrueNAS with CEPH
    I will not replace TrueNAS with CEPH

  • Ubuntu 24.04 LTS is the current version as of this video. Is Ceph broken on that?

  • Hey Network Chuck,

    I'm a cybersecurity tester, and I have to say your videos are spot on. It's great to see someone spreading accurate information, unlike many YouTubers nowadays who often share misleading content. I really appreciate your in-depth content on cybersecurity.

    I have a couple of questions for you: Have you ever considered making a video on how to reverse the connection to a scammer's PC? I know it might be outside your usual scope, but I'm curious about your thoughts on this. Also, what is your operating system of choice for cybersecurity work? Mine is Linux.

Comments are closed.