Home page:
Brought to you by you:
Additional funding provided by Amplify Partners
For any early-stage ML entrepreneurs, Amplify would love to hear from you: 3blue1brown@amplifypartners.com
Full playlist:
Typo correction: At 14:45, the last index on the bias vector is n, when in fact it should be a k. Thanks to the sharp eyes that caught that!
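For reference, here is the corrected layer transition as written in the video, using its 0-based indexing (so the weight matrix has rows 0 through k and columns 0 through n); the bias vector must have one entry per row of the weight matrix, ending at b_k rather than b_n:

```latex
a^{(1)} = \sigma\!\left(W a^{(0)} + b\right),
\qquad
W =
\begin{pmatrix}
w_{0,0} & w_{0,1} & \cdots & w_{0,n} \\
\vdots  & \vdots  & \ddots & \vdots  \\
w_{k,0} & w_{k,1} & \cdots & w_{k,n}
\end{pmatrix},
\qquad
b =
\begin{pmatrix} b_0 \\ \vdots \\ b_k \end{pmatrix}
```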
For those who want to learn more, I highly recommend the book by Michael Nielsen introducing neural networks and deep learning:
There are two neat things about this book. First, it’s available for free, so consider joining me in making a donation Nielsen’s way if you get something out of it. And second, it’s centered around walking through some code and data which you can download yourself, and which covers the same example that I introduce in this video. Yay for active learning!
I also highly recommend Chris Olah’s blog:
For more videos, Welch Labs also has some great series on machine learning:
For those of you looking to go *even* deeper, check out the text “Deep Learning” by Goodfellow, Bengio, and Courville.
Also, the publication Distill is just utterly beautiful:
Lion photo by Kevin Pluck
——————
Animations largely made using manim, a scrappy open source python library.
If you want to check it out, I feel compelled to warn you that it’s not the most well-documented tool, and has many other quirks you might expect in a library someone wrote with only their own use in mind.
Music by Vincent Rubinetti.
Download the music on Bandcamp:
Stream the music on Spotify:
If you want to contribute translated subtitles or to help review those that have already been made by others and need approval, you can click the gear icon in the video and go to subtitles/cc, then “add subtitles/cc”. I really appreciate those who do this, as it helps make the lessons accessible to more people.
——————
3blue1brown is a channel about animating math, in all senses of the word animate. And you know the drill with YouTube, if you want to stay posted on new videos, subscribe, and click the bell to receive notifications (if you’re into that).
If you are new to this channel and want to see more, a good place to start is this playlist:
Various social media stuffs:
Website:
Twitter:
Patreon:
Facebook:
Reddit:
14:38, shouldn't b be k-dimensional? Not n-dimensional?
Also, very excited to have understood enough to find a flaw
Fantastic explanation 🙂
That moment when you realised you created neural networks without knowing what they really were…
The sad part is that this is pretty much deep learning research: "So this one thing didn't work at some point, so they tried this other thing, and for whatever reason it happened to work. Now everybody is using this new thing."
woah… so much just clicked for me.
This might sound crazy, but I want to start referencing YouTube videos, such as this one, within academic writing. This would, however, require less informal language such as "heck" xD. Contact me if you might be interested in re-creating videos, such as this, in an academically acceptable format, then we could start the inevitable moving of the academic space towards videos.
Thank you very much. I want to ask about the program you use to make your videos with these animation effects? Thank you again.
I just want to say that you are 3b^2, because 3b1b =
3 * b * 1 * b =
3 * 1 * b * b =
3 * b * b =
3 * (b^2) =
3b^2
Hi, I'm Yukta. I want to ask: how can I make software that detects fake or deepfake videos… how do I solve this problem… what should I study to build this software…? And I'm a beginner 🙂
That was awesome! May I ask what applications you use for making these animations?
i just want to say THANK YOU SO MUCH for this video. this really helped my understanding of neural networks.
10:57 an alternative way of thinking about it is how certain that neuron is that that region of pixels has that specific shape based on the weights assigned to each pixel there – if it's more certain, the number will be higher. if it's less certain, the number will be lower or negative.
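That intuition can be sketched in a few lines of NumPy. This is a hypothetical 3×3 edge detector, not the video's actual network: positive weights where we expect bright pixels, negative weights elsewhere, so the sigmoid-squashed weighted sum is high when the patch matches the pattern and low when it doesn't.

```python
import numpy as np

# Hypothetical 3x3 detector for a vertical bright stripe: positive
# weights where we expect bright pixels, negative weights elsewhere.
weights = np.array([
    [-1.0, 1.0, -1.0],
    [-1.0, 1.0, -1.0],
    [-1.0, 1.0, -1.0],
])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron_activation(patch, weights, bias=0.0):
    """Weighted sum of pixel activations, squashed into (0, 1)."""
    return sigmoid(np.sum(weights * patch) + bias)

matching = np.array([[0.0, 1.0, 0.0]] * 3)     # bright stripe where weights are positive
mismatching = np.array([[1.0, 0.0, 1.0]] * 3)  # bright where weights are negative

# The neuron is "more certain" (activation near 1) on the matching patch,
# and "less certain" (activation near 0) on the mismatching one.
print(neuron_activation(matching, weights))
print(neuron_activation(mismatching, weights))
```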
❤❤❤❤
Hi Pi, Pi, Pi, Pi and AI!
Nice presentation, you are a very good orator. I never heard you breathe, do you even need air? 🙂 I was able to focus so well on the content and not the presentation.
Thanks for sharing this great video!
How do you figure out the minimum number of layers needed for a 28×28 matrix of real numbers to recognize the digits 0 to 9?
How do you figure out the minimum number of neurons in each layer so that a 28×28 matrix of real numbers can recognize the digits 0 to 9?
Why are the Korean subtitles sometimes there and sometimes not ㅠㅠ
I like living under a rock.
@3blue1Brown At 14:38, isn't the bias matrix supposed to be [k x 1] instead of [n x 1]? I'm not sure if I'm right, but since there are k neurons at layer 1, shouldn't the number of biases also be k?
So inside one neuron, does each input have its own weight with a different value? Or is the same weight value used for every input to that neuron?
You are a God-gifted teacher! Please accept my respect, master!!!
Random thought, imagine watching this when you have trypophobia
holy crap thank you so much for this video it helped so much
That's the best subscribe request I've ever seen at the end of a video: subscribe so the AI can take positive data, on an AI video. Noice.
Can anyone help me understand the 9:50 explanation on weights depicting pixels?
Can’t believe they got Peter Gregory to narrate.
There is an error in the matrix indices at 16:00: the last column index in the first row is n, while in the other rows it's k. They should be consistent.
At 10:05 you say that adding the negative weights around the region to detect edges will increase the weighted sum, but surely, since the activation values are between 0 and 1, it will decrease the weighted sum and thus affect which neurons are activated in the next layer?
Very informative and interesting.
Typo: At 14:40 the vector of biases that is added should be [b0…bk] and not [b0…bn] because the size of the matrix that you get from the multiplication is [k+1] x 1 and not [n+1] x 1.
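The dimension argument above is easy to check with NumPy. The sizes below are illustrative (n = 784 inputs, k = 16 neurons in the next layer, not values stated in the comments): the bias must have one entry per row of the weight matrix, i.e. per neuron in the *next* layer, or broadcasting fails.

```python
import numpy as np

n, k = 784, 16              # illustrative sizes: n inputs, k neurons in next layer

W = np.random.randn(k, n)   # weight matrix: one row per neuron in the next layer
a = np.random.rand(n, 1)    # activations of the current layer, shape (n, 1)

b_correct = np.zeros((k, 1))   # one bias per neuron in the NEXT layer
z = W @ a + b_correct          # shapes: (k, n) @ (n, 1) + (k, 1) -> (k, 1)
print(z.shape)                 # (16, 1)

# An n-dimensional bias does not line up with the product's k rows:
try:
    z_bad = W @ a + np.zeros((n, 1))
except ValueError as e:
    print("shape mismatch:", e)
```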
Hi, I have a doubt: what is the difference between the parameters a and w?
Can you tell me about hidden layers, and what the number of hidden layers depends on?
I think there was a mistake at 14:42. The matrix for the bias should go from b0 to bk, not bn.
Awesome video regardless.
At 00:56, you probably misspoke: it should be 0 to 9, not 0 to 10. Anyway, that doesn't change the concept though 😉