An overview of non-NN approaches
English text of the lesson
In order to keep our word, we must give a quick overview of non-neural-network machine learning algorithms, just so you have an idea of their existence.
Throughout the whole course we considered discriminative models. A discriminative model is one that takes an input and then provides the probability that an output is correct.
Another instance of discriminative models is random forests.
They are based on decision trees and this is why they are called forests.
The main idea is that decision trees are not so good at classifying as they tend to overfit a lot.
A random forest takes many decision trees and makes the point that many bad classifiers equal a good
classifier.
That said, random forests are mainly used for classification.
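To make the "many bad classifiers equal a good classifier" point concrete, here is a minimal sketch that is not part of the lecture. It assumes scikit-learn and a made-up toy dataset, and simply compares one fully grown decision tree with a forest of them.

```python
# A minimal sketch (not from the lecture): a single decision tree versus a
# random forest on a toy dataset, using scikit-learn as an assumed library.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical toy binary-classification data.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A single, fully grown decision tree tends to overfit the training set.
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# A random forest averages many such trees, each grown on a bootstrap sample
# with random feature subsets, which usually generalizes better.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print("single tree test accuracy  :", tree.score(X_test, y_test))
print("random forest test accuracy:", forest.score(X_test, y_test))
```

The exact numbers depend on the data, but averaging many high-variance trees is the whole trick.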
There are also generative models, which don't simply give an output y for a given x. Their target is actually the joint probability distribution of x and y, which carries more information.
A generative model goes from inputs to outputs and from outputs to inputs. That's useful for problems such as translation.
If you have an English-to-Mandarin translator, you'll probably want it to work for Mandarin-to-English as well, or at least you are going to check whether it's working.
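As a rough illustration of why the joint distribution carries more information, the toy example below (not from the lecture, with made-up numbers) starts from a small joint table p(x, y) and recovers both directions: inputs to outputs, p(y | x), and outputs to inputs, p(x | y).

```python
# A minimal sketch (not from the lecture) with a hypothetical joint distribution
# over 2 input values (rows) and 3 output values (columns).
import numpy as np

p_xy = np.array([[0.10, 0.25, 0.05],
                 [0.20, 0.10, 0.30]])   # entries sum to 1

p_x = p_xy.sum(axis=1, keepdims=True)   # marginal p(x)
p_y = p_xy.sum(axis=0, keepdims=True)   # marginal p(y)

p_y_given_x = p_xy / p_x                # discriminative direction: p(y | x)
p_x_given_y = p_xy / p_y                # reverse direction: p(x | y)

print("p(y | x):\n", p_y_given_x)
print("p(x | y):\n", p_x_given_y)
```

A purely discriminative model stores only the first table, so the reverse direction is lost.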
Without going into detail, one example is hidden Markov models. Hidden Markov models assume that the problem at hand is a Markov process.
Broadly speaking, this implies we can predict the future based on the present just as well as if we had the whole history of the process. That, in a way, is the opposite of what we've done so far.
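Here is a small illustration of the Markov property, not from the lecture, with made-up weather-style states and transition probabilities: two different histories that end in the same present state yield exactly the same prediction for the next step.

```python
# A minimal sketch (not from the lecture) of the Markov property.
import numpy as np

states = ["sunny", "rainy"]
# transition[i][j] = probability of moving from state i to state j.
transition = np.array([[0.8, 0.2],
                       [0.4, 0.6]])

def next_state_distribution(present_state: int) -> np.ndarray:
    """The prediction uses only the present state, not how we got there."""
    return transition[present_state]

history_a = [0, 0, 1]   # sunny, sunny, rainy
history_b = [1, 1, 1]   # rainy, rainy, rainy
print(next_state_distribution(history_a[-1]))   # [0.4 0.6]
print(next_state_distribution(history_b[-1]))   # [0.4 0.6], same prediction
```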
Then we have Bayesian networks. As with everything that carries Bayes' name, these models take into account prior probabilities. The difference between a neural network and a Bayesian network is that with Bayesian networks, the probabilities of an event occurring are used as the model's inputs. In a neural network, each input alone doesn't mean much, but a model trained on many, many inputs gives amazing insights.
Bayesian networks are quite useful when we have some uncertainty.
For example, in medicine, a person may have a certain disease with some symptoms manifested, while another may have the same disease but with completely different symptoms manifested.
This is a situation where we have conflicting information. A neural network would be confused, as there would be no trend to be found between the two patients, while the Bayesian network would assume that such a case is not unusual.
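As a rough sketch of this Bayesian reasoning, the toy example below (not from the lecture, with made-up probabilities) combines a prior on the disease with per-symptom likelihoods in the simplest Bayesian-network style, namely naive Bayes. Two patients with completely different symptoms both end up with a substantial posterior, so the conflicting cases don't confuse the model.

```python
# A minimal sketch (not from the lecture) of prior + likelihoods -> posterior.
prior_disease = 0.05                     # hypothetical prior p(disease)

# Hypothetical (p(symptom | disease), p(symptom | no disease)) pairs.
p_symptom = {
    "fever": (0.60, 0.10),
    "rash":  (0.50, 0.05),
    "cough": (0.40, 0.20),
}

def posterior(symptoms):
    """p(disease | observed symptoms), assuming conditionally independent symptoms."""
    p_d, p_not_d = prior_disease, 1.0 - prior_disease
    for s in symptoms:
        given_d, given_not_d = p_symptom[s]
        p_d *= given_d
        p_not_d *= given_not_d
    return p_d / (p_d + p_not_d)

print(posterior(["fever", "cough"]))   # one patient's symptoms
print(posterior(["rash"]))             # a completely different presentation
```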
That said, researchers create neural networks that are designed to reflect such problems, with considerable success.
We believe this was a good overview of what else is out there.
Neural networks are a great place to start exploring the machine learning universe.
Moreover it seems like they are advancing at a much faster pace than other types of machine learning.
It’s worth saying that they are so fascinatingly good at predicting that we humans are not quite sure
why they beat every other type of analysis we’ve invented so far.
Research is making headway, but so far no one truly knows why these structures solve problems with such high accuracy. However, not being able to understand them completely doesn't mean we can't assess them, and the verdict is 10 out of 10.
This course was a great journey and thus we’d like to finish with some food for thought.
Each problem, no matter what it is, whether an image, a business problem, or a Shakespeare sonnet, tells a story. With the right tools you can reveal it and take advantage of it.
Now that you have completed this course you have acquired some very valuable tools and it’s up to you
to find and tell the right story.