See how much you have learned

Course: Deep Learning with TensorFlow / Chapter: Conclusion / Lesson 1


English transcript of the lesson

Sadly, it is time for the conclusion of this course, so it is only right to summarize everything we have seen.

Once you have completed the course, you can clearly see where everything fits and how much you have learned along the way.

We started from the point where we had no idea what machine learning is or how it functions.

First, we introduced the fundamentals of machine learning. There were four ingredients: data, model, objective function, and optimization algorithm.

They were quickly followed by the minimal example.

We generated fake data and found the underlying relationship.

That was our first major practical example.
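That minimal example can be sketched in a few lines of NumPy. The data ranges, the hidden 2x − 3z + 5 relationship, the learning rate, and the iteration count below are illustrative assumptions, not the course's exact values; the point is to show all four ingredients at once.

```python
import numpy as np

# Data: fake inputs with a known (hidden) linear relationship plus noise.
rng = np.random.default_rng(0)
n = 1000
xs = rng.uniform(-10, 10, size=(n, 1))
zs = rng.uniform(-10, 10, size=(n, 1))
inputs = np.column_stack([xs, zs])                  # (n, 2)
noise = rng.uniform(-1, 1, size=(n, 1))
targets = 2 * xs - 3 * zs + 5 + noise               # relationship to recover

# Model: linear, with randomly initialized parameters.
weights = rng.uniform(-0.1, 0.1, size=(2, 1))
bias = rng.uniform(-0.1, 0.1)
lr = 0.02                                           # assumed learning rate

for _ in range(500):
    outputs = inputs @ weights + bias               # model
    deltas = outputs - targets
    loss = np.mean(deltas ** 2) / 2                 # objective: L2-norm loss
    weights -= lr * (inputs.T @ deltas) / n         # optimization: gradient descent
    bias -= lr * np.mean(deltas)

print(weights.ravel(), bias)
```

After training, the learned weights and bias recover the coefficients hidden in the fake data.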

We proved that machine learning is useful. Immediately afterwards, we explored the same problem, but this time through the TensorFlow framework. While still in the early stages of the course, these topics provided us with a solid foundation for what was to come.

Then we went deeper.

We extended our simple linear structure into a complex nonlinear one through activation functions.

We saw the most common of them, and hopefully you practiced on the algorithms we built.
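As a refresher, the usual activation functions can be written directly in NumPy; softmax is included as the standard output activation for classifiers. Which exact set the course covered is recalled from memory, so treat the selection as an assumption.

```python
import numpy as np

def sigmoid(x):
    # Squashes any real number into (0, 1).
    return 1 / (1 + np.exp(-x))

def tanh(x):
    # Squashes into (-1, 1), zero-centered.
    return np.tanh(x)

def relu(x):
    # Zero for negative inputs, identity for positive ones.
    return np.maximum(0, x)

def softmax(x):
    # Turns a vector of scores into a probability distribution.
    e = np.exp(x - np.max(x))        # shift for numerical stability
    return e / e.sum()

a = np.array([-2.0, 0.0, 3.0])
print(sigmoid(a), relu(a), softmax(a))
```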

We also spent a significant portion of time on backpropagation and its mathematical justification, as it is the essence of the optimization process.
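A minimal sketch of backpropagation through one hidden layer, with a numerical check that the chain-rule gradient is right. The network shape, sigmoid activation, and L2 loss are illustrative assumptions, not the course's exact network.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(1, 3))           # one input sample
t = np.array([[1.0]])                 # target
W1 = rng.normal(size=(3, 4))          # hidden-layer weights
W2 = rng.normal(size=(4, 1))          # output-layer weights

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def loss(W1, W2):
    h = sigmoid(x @ W1)               # forward pass
    y = h @ W2
    return 0.5 * np.sum((y - t) ** 2)

# Backward pass: apply the chain rule layer by layer.
h = sigmoid(x @ W1)
y = h @ W2
dy = y - t                            # dL/dy
dW2 = h.T @ dy                        # dL/dW2
dh = dy @ W2.T                        # dL/dh
dz = dh * h * (1 - h)                 # through the sigmoid derivative
dW1 = x.T @ dz                        # dL/dW1

# Numerical gradient for one entry of W1 confirms the derivation.
eps = 1e-6
W1p = W1.copy(); W1p[0, 0] += eps
num = (loss(W1p, W2) - loss(W1, W2)) / eps
print(dW1[0, 0], num)
```

The analytic and numerical gradients agree to several decimal places, which is exactly the "mathematical justification" the lectures walk through.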

Then we continued by learning about overfitting.

For me personally the concept of overfitting is one of the most interesting topics.

Afterwards, we saw and implemented some common strategies to spot the issue of overfitting through the creation of validation and test datasets.

Finally, we concluded the section with some useful early stopping techniques and emphasized the fact that we should not only split our data set but also apply early stopping when necessary.
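The split-and-stop idea can be sketched as follows. The 80/10/10 split, the patience rule, and the synthetic validation losses are all assumptions for illustration, not real training output.

```python
import numpy as np

# Shuffle indices and split 80% train / 10% validation / 10% test.
rng = np.random.default_rng(2)
n = 1000
indices = rng.permutation(n)
train_idx = indices[: int(0.8 * n)]
val_idx = indices[int(0.8 * n): int(0.9 * n)]
test_idx = indices[int(0.9 * n):]

# Pretend per-epoch validation losses: improving, then overfitting.
val_losses = [0.9, 0.7, 0.55, 0.5, 0.52, 0.56, 0.6]

# Patience-based early stopping: stop once the validation loss has not
# improved for `patience` consecutive epochs.
patience = 2
best = float("inf")
wait = 0
stopped_at = None
for epoch, vl in enumerate(val_losses):
    if vl < best:
        best, wait = vl, 0
    else:
        wait += 1
        if wait >= patience:
            stopped_at = epoch
            break
print(len(train_idx), len(val_idx), len(test_idx), stopped_at)
```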

Next we went through a short section about the initialization of variables.

We backed this up with academic research and continued on to optimizers.

We should remember two important takeaways from this section.

The first one relates to batching and the incredible speed improvement it gives us through mini-batch gradient descent.
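A hedged sketch of mini-batch gradient descent on a toy regression: instead of one update per full pass over the data, we shuffle, slice into batches, and update once per batch. The batch size, learning rate, and underlying 4x + 1 relationship are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, batch_size = 1000, 100
x = rng.uniform(-5, 5, size=(n, 1))
y = 4 * x + 1 + rng.normal(0, 0.1, size=(n, 1))

w, b, lr = 0.0, 0.0, 0.05
for epoch in range(20):
    order = rng.permutation(n)                 # reshuffle each epoch
    for start in range(0, n, batch_size):
        idx = order[start:start + batch_size]
        xb, yb = x[idx], y[idx]
        err = (w * xb + b) - yb
        w -= lr * np.mean(err * xb)            # gradient of the L2 loss w.r.t. w
        b -= lr * np.mean(err)                 # gradient w.r.t. b
print(w, b)
```

Each epoch now performs ten parameter updates instead of one, which is where the speed-up comes from.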

The second takeaway was that the learning rate is equally important but probably a bit harder to come up with, so we upgraded it with learning rate schedules and momentum.
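Those two upgrades, a decaying learning rate schedule and a momentum term, can be sketched on a toy quadratic. The schedule form, momentum coefficient, and objective are assumptions chosen for illustration.

```python
def grad(w):
    # Gradient of the toy objective (w - 3)^2, whose minimum is at w = 3.
    return 2 * (w - 3)

w = 0.0
velocity = 0.0
eta0, decay, momentum = 0.1, 0.01, 0.9
for step in range(200):
    lr = eta0 / (1 + decay * step)        # decaying learning rate schedule
    velocity = momentum * velocity - lr * grad(w)   # momentum accumulates past steps
    w += velocity
print(w)
```

The schedule takes big steps early and small steps late, while momentum smooths the update direction across steps.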

We reached the state-of-the-art technique: adaptive moment estimation, known as Adam. Our final theoretical section was on preprocessing and focused on one-hot and binary encoding, as each classification problem requires some encoding.
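For reference, here is the Adam update rule on the same kind of toy quadratic, using its commonly cited default hyperparameters; the objective and step count are illustrative assumptions.

```python
import math

def grad(w):
    # Gradient of the toy objective (w - 3)^2.
    return 2 * (w - 3)

w, m, v = 0.0, 0.0, 0.0
lr, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8
for t in range(1, 1001):
    g = grad(w)
    m = beta1 * m + (1 - beta1) * g          # first moment (mean) estimate
    v = beta2 * v + (1 - beta2) * g * g      # second moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)             # bias correction for the zero init
    v_hat = v / (1 - beta2 ** t)
    w -= lr * m_hat / (math.sqrt(v_hat) + eps)   # adaptive per-parameter step
print(w)
```

Dividing by the square root of the second moment gives each parameter its own effective step size, which is what "adaptive" refers to.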

Finally we went through two practical examples.

The MNIST dataset was introduced as the must-see machine learning problem.

We created the layers, built them with the hyperparameters, and optimized the algorithm to get quite impressive results. Along the way, we implemented almost all the topics seen in the course.
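Two building blocks of such a classifier, one-hot encoding and a softmax output with cross-entropy loss, can be sketched in plain NumPy. The course builds these with TensorFlow; the labels and scores below are made up purely for illustration.

```python
import numpy as np

# One-hot encode three example digit labels into length-10 vectors.
labels = np.array([3, 0, 9])
num_classes = 10
one_hot = np.eye(num_classes)[labels]         # shape (3, 10)

# Fake final-layer scores, turned into probabilities with softmax.
logits = np.random.default_rng(4).normal(size=(3, 10))
shifted = logits - logits.max(axis=1, keepdims=True)   # numerical stability
probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)

# Cross-entropy loss compares predicted probabilities to one-hot targets.
cross_entropy = -np.mean(np.sum(one_hot * np.log(probs), axis=1))
predictions = probs.argmax(axis=1)
print(one_hot[0], cross_entropy, predictions)
```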

Only then were we ready to conclude with the business case, which was the true test of the amount of knowledge you acquired throughout the course.

Our example was as real life as it gets.

We derived a very valuable insight about the company in question and realized that, while much better than traditional prediction methods, even machine learning cannot account for 100 percent of human randomness and client behavior.

We would like to thank you for taking the course and congratulate you for completing it. As a token of appreciation, we will take several lessons to introduce you to what's further out there in the machine learning world and what you should look for in order to further improve and broaden your skills.

Thanks for watching.
