## What will you learn in the Grokking Machine Learning, video edition course?

> "Do you think that machine learning is complicated to grasp? It's not! Check out this book! Serrano exposes some of the most cherished secrets of the machine learning society." Sebastian Thrun, Founder, Udacity

Discover powerful machine learning techniques you can understand and apply using only high school math.

In Grokking Machine Learning you will learn:

- Supervised algorithms to classify and separate data
- Methods to clean and simplify data
- Machine learning tools and packages
- Neural networks and ensemble methods for complex datasets

Grokking Machine Learning teaches you how to apply machine learning to your projects using only standard Python code and high school math. No specialist knowledge is required to complete the hands-on exercises, which use Python and easily accessible machine learning tools. Easy-to-follow Python-based activities and mini-projects guide you toward becoming a machine learning specialist.

#### About the technology

Discover the most effective machine learning techniques you can understand and apply with only high school math! The term machine learning refers to a collection of data-analysis methods based on algorithms that produce better results the more data you give them. ML is the basis of many advanced technologies, including recommendation systems, facial recognition software, smart speakers, and even self-driving vehicles. This book explains the fundamental concepts of machine learning through relatable examples, engaging exercises, and clear illustrations.

#### About the book

Grokking Machine Learning introduces machine learning algorithms and techniques in a manner everyone can comprehend. This book steers clear of the confusing academic jargon and provides simple explanations based on the basics of algebra. As you learn, you'll develop exciting projects using Python and algorithms for spam detection and image recognition. Additionally, you'll learn practical techniques for cleaning and preparing data.
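To give a flavor of the spam-detection project, here is a minimal, self-contained sketch of a naive Bayes word classifier, the model the book builds in chapter 8. The toy data and function names are illustrative assumptions, not the book's own code:

```python
from collections import Counter

# Toy training set of (message, label) pairs -- illustrative data only.
emails = [
    ("win money now", "spam"),
    ("cheap money offer", "spam"),
    ("meeting at noon", "ham"),
    ("lunch at noon tomorrow", "ham"),
]

# Count how often each word appears in each class.
word_counts = {"spam": Counter(), "ham": Counter()}
class_counts = Counter()
for text, label in emails:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = set(word_counts["spam"]) | set(word_counts["ham"])

def predict(text):
    """Pick the class with the higher naive Bayes score (add-one smoothing)."""
    scores = {}
    for label in class_counts:
        # Prior: the fraction of training messages in this class.
        score = class_counts[label] / sum(class_counts.values())
        total = sum(word_counts[label].values())
        for word in text.split():
            # Smoothed likelihood of each word given the class.
            score *= (word_counts[label][word] + 1) / (total + len(vocab))
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("cheap money"))    # -> spam
print(predict("lunch meeting"))  # -> ham
```

The add-one (Laplace) smoothing keeps a single unseen word from zeroing out a class's entire score, a point the book's chapter 8 treatment also addresses.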

#### About the audience

No machine learning expertise is needed, but basic Python is required.

### Course Content:

- Chapter 1. What is machine learning? It is common sense, except done by a computer.
- Chapter 1. What is machine learning?
- Chapter 1. Some examples of models that humans use
- Chapter 1. Example 4: More?
- Chapter 2. Types of machine learning
- Chapter 2. Supervised learning: The branch of machine learning that works with labeled data
- Chapter 2. Unsupervised learning: The branch of machine learning that works with unlabeled data
- Chapter 2. Dimensionality reduction simplifies data without losing too much information
- Chapter 2. What is reinforcement learning?
- Chapter 3. Drawing a line close to our points: Linear regression
- Chapter 3. The remember step: Looking at the prices of existing houses
- Chapter 3. Some questions that arise and some quick answers
- Chapter 3. Crash course on slope and y-intercept
- Chapter 3. Simple trick
- Chapter 3. The linear regression algorithm: Repeating the absolute or square trick many times to move the line closer to the points
- Chapter 3. How do we measure our results? The error function
- Chapter 3. Gradient descent: How to decrease an error function by slowly descending from a mountain
- Chapter 3. Real-life application: Using Turi Create to predict housing prices in India
- Chapter 3. Parameters and hyperparameters
- Chapter 4. Optimizing the training process: Underfitting, overfitting, testing, and regularization
- Chapter 4. How do we get the computer to pick the suitable model? By testing
- Chapter 4. A numerical way to decide how complex our model should be: The model complexity graph
- Chapter 4. Another example of overfitting: Movie recommendations
- Chapter 4. Modifying the error function to solve our problem: Lasso regression and ridge regression
- Chapter 4. An intuitive way to see regularization
- Chapter 4. Polynomial regression, testing, and regularization with Turi Create
- Chapter 4. Polynomial regression, testing, and regularization with Turi Create: The testing RMSE for the models
- Chapter 5. Using lines to split our points: The perceptron algorithm
- Chapter 5. The problem: We are on an alien planet, and we don’t know their language!
- Chapter 5. Sentiment analysis classifier
- Chapter 5. The step function and activation functions: A condensed way to get predictions
- Chapter 5. The bias, the y-intercept, and the inherent mood of a quiet alien
- Chapter 5. Error function 3: Score
- Chapter 5. Pseudocode for the perceptron trick (geometric)
- Chapter 5. Bad classifier
- Chapter 5. Pseudocode for the perceptron algorithm
- Chapter 5. Coding the perceptron algorithm using Turi Create
- Chapter 6. A continuous approach to splitting points: Logistic classifiers
- Chapter 6. The dataset and the predictions
- Chapter 6. Error function 3: log loss
- Chapter 6. The formula for the log loss
- Chapter 6. Pseudocode for the logistic trick
- Chapter 6. Coding the logistic regression algorithm
- Chapter 6. Classifying into multiple classes: The softmax function
- Chapter 7. How do you measure classification models? Accuracy and its friends
- Chapter 7. False positives and false negatives: Which one is worse?
- Chapter 7. Recall: Among the positive examples, how many did we correctly classify?
- Chapter 7. Combining recall and precision as a way to optimize both: The F-score
- Chapter 7. A valuable tool to evaluate our model: The receiver operating characteristic (ROC) curve
- Chapter 7. The receiver operating characteristic (ROC) curve: A way to optimize sensitivity and specificity in a model
- Chapter 7. A metric that tells us how good our model is: The AUC (area under the curve)
- Chapter 7. The recall is sensitivity, but precision and specificity are different
- Chapter 7. Summary
- Chapter 8. Using probability to its maximum: The naive Bayes model
- Chapter 8. Sick or healthy? A story with Bayes’ theorem as the hero: Let’s calculate this probability.
- Chapter 8. Prelude to Bayes’ theorem: The prior, the event, and the posterior
- Chapter 8. What the math just happened? Turning ratios into probabilities
- Chapter 8. What the math just happened? Turning ratios into probabilities: Product rule of probabilities
- Chapter 8. What about two words? The naive Bayes algorithm
- Chapter 8. What about more than two words?
- Chapter 8. Implementing the naive Bayes algorithm
- Chapter 9. Splitting data by asking questions: Decision trees
- Chapter 9. Picking an excellent first question
- Chapter 9. The solution: Building an app-recommendation system
- Chapter 9. Gini impurity index: How diverse is my dataset?
- Chapter 9. Entropy: Another measure of diversity with solid applications in information theory
- Chapter 9. Classes of different sizes? No problem: We can take weighted averages
- Chapter 9. Beyond questions like yes/no
- Chapter 9. The graphical boundary of decision trees
- Chapter 9. Setting hyperparameters in Scikit-Learn
- Chapter 9. Applications
- Chapter 10. Combining building blocks to gain more power: Neural networks
- Chapter 10. Why two lines? Is happiness not linear?
- Chapter 10. The boundary of a neural network
- Chapter 10. Potential problems: From overfitting to vanishing gradients
- Chapter 10. Neural networks with more than one output: The softmax function
- Chapter 10. Training the model
- Chapter 10. Other architectures for more complex datasets
- Chapter 10. How neural networks paint paintings: Generative adversarial networks (GAN)
- Chapter 11. Finding boundaries with style: Support vector machines and the kernel method
- Chapter 11. Distance error function: Trying to separate our two lines as far apart as possible
- Chapter 11. Training SVMs with nonlinear boundaries: The kernel method
- Chapter 11. Going beyond quadratic equations: The polynomial kernel
- Chapter 11. A measure of how close points are: Similarity
- Chapter 11. Overfitting and underfitting with the RBF kernel: The gamma parameter
- Chapter 12. Combining models to maximize results: Ensemble learning
- Chapter 12. Fitting a random forest manually
- Chapter 12. Combining the weak learners into a strong learner
- Chapter 12. Gradient boosting: Using decision trees to build strong learners
- Chapter 12. XGBoost similarity score: A new and effective way to measure similarity in a set
- Chapter 12. Building the weak learners: Split at 25
- Chapter 12. Tree pruning: A way to reduce overfitting by simplifying the weak learners
- Chapter 13. Putting it all in practice: A real-life example of data engineering and machine learning
- Chapter 13. Using Pandas to study our dataset
- Chapter 13. Turning categorical data into numerical data: One-hot encoding
- Chapter 13. Feature selection: Getting rid of unnecessary features
- Chapter 13. Testing each model’s accuracy
- Chapter 13. Tuning the hyperparameters to find the best model: Grid search
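The course closes with grid search, which can be sketched in a few lines: score every candidate hyperparameter on a held-out validation set and keep the best performer. The toy k-nearest-neighbors model and data below are illustrative assumptions, not taken from the book:

```python
from collections import Counter

# Toy 1-D training set: (feature, label) pairs -- illustrative data only.
train = [(1, "a"), (2, "a"), (3, "a"), (10, "b"), (11, "b"), (12, "b")]
# Held-out validation set used to score each hyperparameter choice.
val = [(2.5, "a"), (10.5, "b"), (4, "a"), (9, "b")]

def knn_predict(x, k):
    """Label of the majority among the k training points nearest to x."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def accuracy(k):
    return sum(knn_predict(x, k) == y for x, y in val) / len(val)

# Grid search: evaluate every candidate k, keep the best validation score.
grid = [1, 3, 5]
best_k = max(grid, key=accuracy)
print(best_k, accuracy(best_k))  # best k on the validation set, with its accuracy
```

The same loop generalizes to several hyperparameters at once by iterating over the Cartesian product of the candidate values, which is what library grid-search utilities automate.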
