Cross-entropy loss
English transcript
Hi, and welcome back. What about classification?
We discussed that the output of a regression is a number, but for classification things are different, since the outputs are categories, like cats and dogs.
We need a better-suited strategy.
The most common loss function used for classification is cross-entropy, and it is defined as L(y, t) = −Σᵢ tᵢ · ln(yᵢ): minus the sum of the targets times the natural log of the outputs.
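In code, that formula might look like this (a minimal NumPy sketch; the name cross_entropy and its signature are just for illustration):

```python
import numpy as np

def cross_entropy(y, t):
    """L(y, t) = -sum(t * ln(y)): targets times the natural log of the outputs."""
    y = np.asarray(y, dtype=float)  # model outputs (probabilities per class)
    t = np.asarray(t, dtype=float)  # target vector (1 for the correct class)
    return -np.sum(t * np.log(y))
```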
Time for an example before I lose your interest.
Let’s consider our cats and dogs problem.
This time, we will have a third category: horse.
Here’s an image labeled as dog.
The label is the target.
But how does it look in numerical terms?
Well, the target vector t for this photo would be [0, 1, 0]. The first 0 means it is not a cat, the 1 shows it is a dog, and the third 0 indicates it is not a horse.
OK, let's examine a different image.
This time, it will be labeled horse.
Its target vector is [0, 0, 1].
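Such one-hot target vectors could be built like this (a small sketch; the one_hot helper and the cat/dog/horse ordering are assumptions for illustration):

```python
import numpy as np

CLASSES = ["cat", "dog", "horse"]

def one_hot(label):
    """Map a label to its target vector t, e.g. 'dog' -> [0, 1, 0]."""
    t = np.zeros(len(CLASSES))
    t[CLASSES.index(label)] = 1.0
    return t

print(one_hot("dog"))    # [0. 1. 0.]
print(one_hot("horse"))  # [0. 0. 1.]
```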
Imagine the outputs of our model for these two images are [0.4, 0.4, 0.2] for the first image and [0.1, 0.2, 0.7] for the second.
After some machine learning transformations, these vectors show the probabilities for each photo to be a cat, a dog, or a horse.
We will learn how to create these vectors later in the course.
For now we just need to know how to interpret them.
The first vector shows that, according to our algorithm, there is a 0.4, or 40 percent, chance that the first photo is a cat, a 40 percent chance it is a dog, and a 20 percent chance it is a horse.
So that’s the interpretation of these vectors.
What about the cross-entropy of each photo?
The cross-entropy loss for the first image is minus zero times the natural log of 0.4, minus one times the natural log of 0.4, minus zero times the natural log of 0.2; this equals approximately 0.92. The cross-entropy loss for the second image is minus zero times the natural log of 0.1, minus zero times the natural log of 0.2, minus one times the natural log of 0.7, which equals approximately 0.36. As we already know, the lower the loss function, or the cross-entropy in this case, the more accurate the model.
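We can double-check both numbers with the same kind of sketch as above (again, a minimal illustration, not the course's own code):

```python
import numpy as np

def cross_entropy(y, t):
    return -np.sum(np.asarray(t) * np.log(np.asarray(y)))

# First image: labeled dog, outputs (cat, dog, horse) = (0.4, 0.4, 0.2)
print(round(cross_entropy([0.4, 0.4, 0.2], [0, 1, 0]), 2))  # 0.92
# Second image: labeled horse, outputs (0.1, 0.2, 0.7)
print(round(cross_entropy([0.1, 0.2, 0.7], [0, 0, 1]), 2))  # 0.36
```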
So, what's the meaning of these two cross-entropies?
They show the second loss is lower.
Therefore its prediction is superior.
This is what we expected. For the first image, the model was not sure whether the photo was of a dog or a cat.
There was an equal 40 percent probability for both options.
We can contrast this with the second photo, where the model was 70 percent sure it was a horse.
Thus the cross entropy was lower.
OK.
An important note is that with classification, our target vectors consist of a bunch of zeros and a single one, which indicates the correct category. Therefore, we could simplify the above formula to minus the log of the probability of the output for the correct answer.
Here’s an illustration of how our initial formulas would change.
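In code, that simplification might look like this (a sketch using the first image's values from above): because the target has a single one, the full sum collapses to minus the log of the output for the correct class.

```python
import numpy as np

y = np.array([0.4, 0.4, 0.2])   # first image: (cat, dog, horse) outputs
t = np.array([0.0, 1.0, 0.0])   # target: dog

full = -np.sum(t * np.log(y))   # the original formula
simplified = -np.log(y[1])      # just -ln of the 'dog' output

print(np.isclose(full, simplified))  # True: both are ~0.92
```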
All right, those were examples of commonly used loss functions for regression and classification.
Most regression and classification problems are solved using them, but there are other loss functions that can help us resolve a problem.
We must emphasize that any function that holds the basic property of being higher for worse results
and lower for better results can be a loss function.
We will often use this observation when coding.
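As a toy illustration of that property (just a sketch, not one of the standard losses from this lesson): any function that grows as predictions get worse can play the role of a loss.

```python
import numpy as np

def toy_loss(y, t):
    """Squared distance between outputs and targets: higher = worse."""
    return np.sum((np.asarray(y) - np.asarray(t)) ** 2)

# A confident correct prediction scores lower than an uncertain one.
print(toy_loss([0.1, 0.8, 0.1], [0, 1, 0]))  # 0.06 -> better
print(toy_loss([0.4, 0.4, 0.2], [0, 1, 0]))  # 0.56 -> worse
```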
It will all become clear when we see them in action.
That’s all for now.
Thanks for watching.