How to tackle the MNIST
How are we going to approach this image recognition problem?
Each image in the MNIST dataset is 28 pixels by 28 pixels in grayscale, so we can think of the problem as a 28-by-28 matrix whose input values range from 0 to 255, where 0
corresponds to purely black and 255 to purely white.
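To make the matrix idea concrete, here is a minimal numpy sketch. The tiny 4x4 array below is an illustrative stand-in for a real 28x28 MNIST image (the actual digit data is not reproduced here); each entry is a grayscale intensity in the 0-255 range described above.

```python
import numpy as np

# A toy 4x4 "image" standing in for a 28x28 MNIST digit.
# Each entry is a grayscale intensity: 0 = purely black, 255 = purely white.
image = np.array([
    [0,   0, 255,   0],
    [0, 255, 255,   0],
    [0,   0, 255,   0],
    [0,   0, 255,   0],
], dtype=np.uint8)

print(image.shape)               # (4, 4)
print(image.min(), image.max())  # intensities stay within the 0-255 range
```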
For example, a handwritten seven written as a matrix would look like this. That's an approximation,
but the idea is more or less the same. Now, because all the images are of the same size, a 28-by-28 photo
will have 784 pixels.
The approach for deep feedforward neural networks is to transform, or flatten, each image into a vector
of length 784,
so for each image we would have 784 inputs.
Each input corresponds to the intensity of the color of the corresponding pixel.
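The flattening step can be sketched in a couple of lines of numpy. The random image below is only a placeholder for a real MNIST digit; the point is the shape transformation from a 28x28 matrix to a length-784 vector.

```python
import numpy as np

# A random 28x28 grayscale image (values 0-255), standing in for one MNIST digit.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(28, 28), dtype=np.uint8)

# Flatten the 28x28 matrix into a vector of length 784 = 28 * 28.
flat = image.reshape(-1)

print(flat.shape)  # (784,)
```

Note that flattening preserves every pixel value; it only discards the 2D layout, which is exactly the trade-off a plain feedforward network makes.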
We will have 784 input units in our input layer. Then we will linearly combine
them and add a non-linearity to get the first hidden layer. For our example,
we will build a model with two hidden layers; two hidden layers are enough to produce a model with very
good accuracy.
Finally, we will produce the output layer.
There are 10 digits, so 10 classes.
Therefore, we will have 10 output units in the output layer. The output will then be compared to the targets.
We will use one-hot encoding for both the outputs and the targets.
For example, the digit 0 will be represented by this vector, while the digit 5 by that one, OK?
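Since the vectors shown on screen are not reproduced here, a small helper makes the one-hot encoding explicit: a length-10 vector with a single 1 at the position of the digit.

```python
import numpy as np

def one_hot(digit, num_classes=10):
    """Return a length-10 vector with a 1 at position `digit` and 0 elsewhere."""
    vector = np.zeros(num_classes, dtype=int)
    vector[digit] = 1
    return vector

print(one_hot(0))  # [1 0 0 0 0 0 0 0 0 0]
print(one_hot(5))  # [0 0 0 0 0 1 0 0 0 0]
```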
Since we would like to see the probability of a digit being rightfully labeled, we will use a softmax
activation function for the output layer.
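Softmax turns the 10 raw output scores into a valid probability distribution. A minimal sketch (the example logits are made up for illustration):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax: exponentiate, then normalize to sum to 1."""
    shifted = logits - np.max(logits)  # subtract the max for numerical stability
    exps = np.exp(shifted)
    return exps / exps.sum()

# Ten raw output scores (logits), one per digit class -- illustrative values only.
logits = np.array([1.0, 2.0, 0.5, 0.1, 3.0, 0.2, 0.3, 0.4, 0.1, 0.0])
probs = softmax(logits)

print(probs.sum())    # ~1.0, a valid probability distribution
print(probs.argmax()) # 4 -- the most probable class for these logits
```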
It is easy to talk about these problems now that you've been through the course, isn't it?
So let me walk you through the action plan.
You will quickly notice our curriculum so far has covered everything you need.
First, we must prepare our data and preprocess it a bit.
We will create training, validation, and test datasets, as well as select the batch size.
Second, we must outline the model and choose the activation functions we want to employ.
Third, we must set the appropriate advanced optimizers and the loss function.
Fourth, we will make it learn. The algorithm will backpropagate its way to accuracy. At each epoch,
we will validate.
Finally, we will test the accuracy of the model on the test dataset.
All right, let's get to it.
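The architecture outlined in this lesson can be sketched end to end with a plain numpy forward pass. The source only fixes 784 inputs, two hidden layers, and a 10-unit softmax output; the hidden-layer size of 50, the ReLU non-linearity, and the random weights below are illustrative assumptions, not the course's actual training setup.

```python
import numpy as np

rng = np.random.default_rng(42)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Architecture from the lesson: 784 inputs -> two hidden layers -> 10 outputs.
# The hidden size (50), ReLU choice, and random weights are assumptions only.
sizes = [784, 50, 50, 10]
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    # Hidden layers: a linear combination followed by a non-linearity.
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0, x @ W + b)
    # Output layer: softmax turns the 10 scores into class probabilities.
    return softmax(x @ weights[-1] + biases[-1])

x = rng.random(784)  # one flattened 28x28 image, scaled to [0, 1]
probs = forward(x)
print(probs.shape)   # (10,)
```

Training this model (backpropagation, the optimizer, validation at each epoch) is what the four-step plan above covers; this sketch only shows the shape of the computation.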