30 Things Everyone Should Know About Machine Learning


Machine learning is an exciting and disruptive force in the world of technology, and small business owners in particular will want to be clear about the promises, outcomes, and risks of this emerging technology. Let's walk through the 30 most important things everyone should know about machine learning right now.

 

  1. Machine learning is about data and algorithms, but mostly data. There's a great deal of excitement about advances in machine learning algorithms, and especially about deep learning. But data is the key ingredient that makes machine learning possible. You can have machine learning without sophisticated algorithms, but not without good data.
  2. Unless you have a lot of data, you should stick to simple models. Machine learning trains a model from patterns in your data, exploring a space of candidate models defined by parameters. If your parameter space is too big, you'll overfit to your training data and train a model that doesn't generalize beyond it. A detailed explanation requires more math, but as a rule of thumb you should keep your models as simple as possible, as the sketch below illustrates.
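    To make this concrete, here is a minimal sketch (assuming numpy and scikit-learn are available; the dataset and polynomial degrees are illustrative, not from the original post). On a 30-point dataset, a high-degree polynomial fits the training set almost perfectly yet does far worse on held-out data than a simpler model:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(30, 1))                  # only 30 examples
    y = np.sin(3 * X).ravel() + rng.normal(0, 0.2, 30)    # noisy target

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for degree in (1, 3, 15):
        # More polynomial terms = a bigger parameter space to overfit with.
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
        model.fit(X_train, y_train)
        print(f"degree={degree:2d}  "
              f"train MSE={mean_squared_error(y_train, model.predict(X_train)):.3f}  "
              f"test MSE={mean_squared_error(y_test, model.predict(X_test)):.3f}")
    ```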
  3. Machine learning can only be as good as the data you use to train it. The phrase "garbage in, garbage out" predates machine learning, but it aptly characterizes a key limitation of machine learning. Machine learning can only find patterns that are present in your training data. For supervised machine learning tasks like classification, you'll need a robust collection of correctly labeled, richly featured training data.
  4. Machine learning only works if your training data is representative. Just as a fund prospectus warns that "past performance is no guarantee of future results," machine learning should caution that it's only guaranteed to work for data generated by the same distribution that generated its training data. Be careful of skews between training data and production data, and retrain your models frequently so they don't become stale; the sketch below shows one simple way to spot such a skew.
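    As an illustration (my own choice of technique, not something the post prescribes), a two-sample Kolmogorov-Smirnov test from scipy is one simple way to flag a feature whose production distribution has drifted away from the training distribution; the synthetic data here is hypothetical:

    ```python
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(1)
    train_feature = rng.normal(0.0, 1.0, 5000)  # distribution the model was trained on
    prod_feature = rng.normal(0.5, 1.0, 5000)   # what production traffic looks like now

    # Large KS statistic / tiny p-value => the two samples likely differ.
    stat, p_value = ks_2samp(train_feature, prod_feature)
    if p_value < 0.01:
        print(f"Distribution skew detected (KS={stat:.3f}); consider retraining.")
    ```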
  5. The vast majority of the hard work in machine learning is data transformation. From reading the hype about new machine learning techniques, you might imagine that machine learning is mostly about selecting and tuning algorithms. The reality is more mundane: most of your time and effort goes into data cleansing and feature engineering, that is, transforming raw features into features that better represent the signal in your data.
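    A small, hypothetical example of that unglamorous work, sketched with pandas (the column names and values are invented for illustration):

    ```python
    import pandas as pd

    raw = pd.DataFrame({
        "timestamp": ["2024-01-05 09:12:00", "2024-01-06 23:45:00", None],
        "amount": ["19.99", "bad_value", "5.00"],
    })

    # Cleansing: coerce malformed strings to NaN, then impute with the median.
    raw["amount"] = pd.to_numeric(raw["amount"], errors="coerce")
    raw["amount"] = raw["amount"].fillna(raw["amount"].median())

    # Feature engineering: derive features that expose the signal.
    ts = pd.to_datetime(raw["timestamp"])
    raw["hour"] = ts.dt.hour                  # time of day often carries signal
    raw["is_weekend"] = ts.dt.dayofweek >= 5
    print(raw)
    ```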
  6. Deep learning is a revolutionary advance, but it isn't a magic bullet. Deep learning has earned its hype by delivering advances across a broad range of machine learning application areas. Moreover, deep learning automates some of the work traditionally performed through feature engineering, especially for image and video data. But deep learning isn't a silver bullet. You can't simply use it out of the box, and you'll still have to put significant effort into data cleansing and transformation.
  7. Machine learning systems are highly vulnerable to operator error. With apologies to the NRA, "machine learning algorithms don't kill people; people kill people." When machine learning systems fail, it's rarely because of problems with the machine learning algorithm. More likely, you've introduced human error into the training data, creating bias or some other systematic error. Always be skeptical, and approach machine learning with the discipline you apply to software engineering.
  8. Machine learning can inadvertently create a self-fulfilling prophecy. In many applications of machine learning, the decisions you make today affect the training data you collect tomorrow. Once your machine learning system embeds biases into its model, it can keep generating new training data that reinforces those biases. And some biases can ruin people's lives. Be responsible: don't create self-fulfilling prophecies.
  9. AI is not going to become self-aware, rise up, and destroy humanity. A surprising number of people seem to get their ideas about artificial intelligence from science-fiction movies. We should be inspired by science fiction, but not so credulous that we mistake it for reality. There are enough real and present dangers to worry about, from deliberately malicious people to unintentionally biased machine learning models. So you can stop worrying about SkyNet and "superintelligence."
  10. Machine learning means learning from data; AI is a buzzword. Machine learning lives up to the hype: there are an impressive number of problems you can solve by providing the right training data to the right learning algorithms. Call it AI if that helps you sell it, but know that AI, at least as used outside academia, is often a buzzword that can mean whatever people want it to mean.
  11. LEARNING = REPRESENTATION + EVALUATION + OPTIMIZATION

    Suppose you have an application that you think machine learning might be good for. The first problem facing you is the bewildering variety of learning algorithms available. Which one should you use? There are literally thousands available, and hundreds more are published every year. The key to not getting lost in this huge space is to realize that it consists of combinations of just three components: representation, evaluation, and optimization. The sketch below maps each component onto code.
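    A toy illustration of the three components, using a hypothetical decision stump (none of this code comes from the post; it is a minimal sketch in plain numpy):

    ```python
    import numpy as np

    X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([0, 0, 0, 1, 1])

    def predict(threshold, x):
        # REPRESENTATION: the model family is "predict 1 if x > threshold".
        return (x > threshold).astype(int)

    def accuracy(threshold):
        # EVALUATION: a scoring function that ranks candidate models.
        return (predict(threshold, X) == y).mean()

    # OPTIMIZATION: search the candidate models for the best score.
    best = max(np.arange(0.5, 6.0), key=accuracy)
    print(f"best threshold = {best}, accuracy = {accuracy(best):.2f}")
    ```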

  12. IT'S GENERALIZATION THAT MATTERS: The fundamental goal of machine learning is to generalize beyond the examples in the training set. This is because, no matter how much data we have, it is very unlikely that we will see those exact examples again at test time. The most common mistake among machine learning beginners is to test on the training data and get the illusion of success. If the chosen classifier is then tested on new data, it is often no better than random guessing. So, if you hire somebody to build a classifier, be sure to keep some of the data to yourself and test the classifier they give you on it. The sketch below shows the basic held-out-data discipline.
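    A minimal sketch of that discipline with scikit-learn (the dataset and model are arbitrary choices for illustration): score the model on data it never saw during training, and trust that number, not the training score.

    ```python
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    # Hold out 30% of the data; the model never sees it during training.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42)

    clf = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)
    print("train accuracy:", clf.score(X_train, y_train))  # often near-perfect
    print("test accuracy: ", clf.score(X_test, y_test))    # the honest number
    ```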
  13. DATA ALONE IS NOT SUFFICIENT

    Generalization being the goal has another major consequence: data alone is not enough, no matter how much of it you have.

  14. OVERFITTING HAS MANY FACES: Everybody in machine learning knows about overfitting, but it comes in many forms that are not immediately obvious. One way to understand overfitting is by decomposing generalization error into bias and variance, as in the decomposition below.
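    For squared-error loss, the standard decomposition (a textbook result, not spelled out in the post) assumes y = f(x) + ε with zero-mean noise ε of variance σ², and splits the expected error of a learned model f̂ into three parts:

    $$
    \mathbb{E}\big[(y - \hat{f}(x))^2\big]
    = \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
    + \underbrace{\mathbb{E}\big[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2\big]}_{\text{variance}}
    + \underbrace{\sigma^2}_{\text{irreducible noise}}
    $$

    High bias means the model is systematically wrong (underfitting); high variance means it changes wildly with the training sample (overfitting).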
  15. INTUITION FAILS IN HIGH DIMENSIONS: After overfitting, the biggest problem in machine learning is the curse of dimensionality. Many algorithms that work fine in low dimensions become intractable when the input is high-dimensional. But in machine learning the curse refers to much more than computational cost. Generalizing correctly becomes exponentially harder as the dimensionality (number of features) of the examples grows, because a fixed-size training set covers a dwindling fraction of the input space. The sketch below shows one symptom: in high dimensions, distances between points become nearly indistinguishable.
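    A quick numerical illustration (my own construction, not from the post): sample random points in the unit cube and compare each point's nearest and farthest neighbor. As the dimension grows, the ratio approaches 1, so "nearest" stops meaning much.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    for d in (2, 10, 100, 1000):
        points = rng.uniform(size=(1000, d))              # 1000 points in [0,1]^d
        dists = np.linalg.norm(points[1:] - points[0], axis=1)
        # Ratio near 0: a clear nearest neighbor exists. Ratio near 1: all
        # points are about equally far away, and distance loses its meaning.
        print(f"d={d:4d}  min/max distance ratio = {dists.min() / dists.max():.3f}")
    ```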
  16. THEORETICAL GUARANTEES ARE NOT WHAT THEY SEEM: Machine learning papers are full of theoretical guarantees. The most common type is a bound on the number of examples needed to ensure good generalization. What should you make of these guarantees? The main role of theoretical guarantees in machine learning is not as a criterion for practical decisions, but as a source of understanding and a driving force for algorithm design.
  17. FEATURE ENGINEERING IS THE KEY: At the end of the day, some machine learning projects succeed and some fail. What makes the difference? Easily the most important factor is the features used. If you have many independent features that each correlate well with the class, learning is easy. On the other hand, if the class is a very complex function of the features, you may not be able to learn it. Often, the raw data is not in a form that is amenable to learning, but you can construct features from it that are. This is typically where most of the effort in a machine learning project goes. It is often also one of the most interesting parts, where intuition, creativity, and "black art" are as important as the technical stuff. The sketch below shows how one constructed feature can turn a hard problem into an easy one.
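    A minimal, hypothetical example: a linear classifier cannot separate points inside a circle from points outside it using the raw coordinates, but adding a single engineered feature (the squared radius) makes the problem trivial.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 2))
    y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)   # class: outside unit circle?

    raw = LogisticRegression(max_iter=1000).fit(X, y)
    print("raw features:       ", raw.score(X, y))        # barely better than chance

    # Engineered feature: squared distance from the origin.
    X_eng = np.column_stack([X, X[:, 0] ** 2 + X[:, 1] ** 2])
    eng = LogisticRegression(max_iter=1000).fit(X_eng, y)
    print("engineered features:", eng.score(X_eng, y))    # near-perfect
    ```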
  18. MORE DATA BEATS A CLEVERER ALGORITHM: Suppose you've constructed the best set of features you can, but the classifiers you're getting are still not accurate enough. What can you do now? There are two main choices: design a better learning algorithm, or gather more data. Machine learning researchers are mainly concerned with the former, but pragmatically the quickest path to success is often simply to get more data, as the learning-curve sketch below suggests.
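    A rough learning-curve sketch (synthetic data, arbitrary model choice): hold the learner fixed and feed it more and more training examples, and test accuracy typically climbs with no algorithmic cleverness at all.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=20000, n_features=20,
                               n_informative=10, random_state=0)
    X_pool, X_test, y_pool, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    # The same fixed learner, trained on increasingly large slices of the pool.
    for n in (100, 1000, 15000):
        clf = DecisionTreeClassifier(random_state=0).fit(X_pool[:n], y_pool[:n])
        print(f"n={n:6d}  test accuracy = {clf.score(X_test, y_test):.3f}")
    ```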
  19. LEARN MANY MODELS, NOT JUST ONE: In the early days of machine learning, everyone had a favorite learner, and most effort went into trying many variations of it and selecting the best one. Then systematic empirical comparisons showed that the best learner varies from application to application, and systems containing many different learners started to appear. Effort now went into trying many variations of many learners, and still selecting just the best one; today, combining the models often works better than picking a single winner, as in the ensemble sketch below.
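    A minimal ensemble sketch with scikit-learn's VotingClassifier (the learner choices and dataset are mine, for illustration): three different learners are scored individually and then combined by majority vote.

    ```python
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    members = [
        ("logreg", make_pipeline(StandardScaler(), LogisticRegression())),
        ("tree", DecisionTreeClassifier(random_state=0)),
        ("nb", GaussianNB()),
    ]
    # Score each learner alone, then a majority-vote ensemble of all three.
    for name, model in members + [("ensemble", VotingClassifier(members))]:
        print(f"{name:8s} accuracy = {cross_val_score(model, X, y, cv=5).mean():.3f}")
    ```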
  20. SIMPLICITY DOES NOT IMPLY ACCURACY: In machine learning, it is often claimed that, given two classifiers with the same training error, the simpler of the two will likely have the lower test error. Purported proofs of this claim appear regularly in the literature, but in fact there are counterexamples to it, and the "no free lunch" theorems imply it cannot be true.
  21. REPRESENTABLE DOES NOT IMPLY LEARNABLE: Essentially all representations used in variable-size learners have associated theorems of the form "Every function can be represented, or approximated arbitrarily closely, using this representation." Reassured by this, fans of a given representation often proceed to ignore all others. However, just because a function can be represented does not mean it can be learned.
  22. CORRELATION DOES NOT IMPLY CAUSATION: The goal of learning predictive models is to use them as guides to action. If we find that drinks and snacks are often bought together at the grocery store, then perhaps putting the drinks next to the snack section will increase sales. But short of actually doing the experiment, it's hard to tell. Machine learning is usually applied to observational data, where the predictive variables are not under the control of the learner, as opposed to experimental data, where they are. The sketch below shows how a hidden common cause produces exactly this kind of misleading correlation.
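    The classic confounder example, rendered as a tiny simulation (numbers invented for illustration): ice-cream sales and drownings are strongly correlated because both are driven by temperature, yet neither causes the other.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    temperature = rng.normal(25, 5, 1000)                  # hidden common cause
    ice_cream = 2.0 * temperature + rng.normal(0, 3, 1000)
    drownings = 0.5 * temperature + rng.normal(0, 1, 1000)

    r = np.corrcoef(ice_cream, drownings)[0, 1]
    print(f"correlation = {r:.2f}")  # high, despite no causal link between them
    ```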
  23. Algorithms must be implemented before being used.                                                                              
  24. Machine learning is a subset of AI. Machine learning is a practical form of artificial intelligence, and represents the science of getting computers to act without being explicitly programmed.
  25. Machine learning is all around us. We see everyday examples of machine learning all around us, and we take many of them for granted on a daily basis: tagging on Facebook, product recommendations from Amazon, Google's page ranking system, and automatic spam filtering in Gmail are all examples of machine learning.
  26. Machine learning represents a new paradigm in computing. In a recent paper on the subject, Pedro Domingos puts it this way: "Machine learning algorithms can figure out how to perform important tasks by generalizing from examples. This is often feasible and cost-effective where manual programming is not. As more data becomes available, more ambitious problems can be tackled. As a result, machine learning is widely used in computer science and other fields."
  27. Machine learning is at the center of 'smart machine' technology. Smart machines are systems that use artificial intelligence and machine learning algorithms to make decisions and solve problems without human intervention. Smart machines are found in applications such as context-aware devices, like cell phones that can sense their physical environment and adjust their behavior accordingly, and intelligent personal assistants like Google Now and Apple's Siri.
  28. Machine learning will significantly disrupt the future of jobs. Research shows that many CEOs are underestimating the systemic and deep effect that smart machines will have through 2020, as well as the potential for them to replace millions of middle-class jobs in the decades to come.
  29. Machine learning will create new learning opportunities: IT and business professionals should keep their job skills relevant and up to date by making sure they pursue capabilities and cognitive tasks that machines can't touch. This will require ongoing training and development in higher-order skills such as coding, statistics, visualization, linguistics, information management, and Big Data.
  30. You can only learn machine learning by implementing algorithms.

 

