[ML UTD 5] Machine Learning Up-To-Date


Welcome to Machine Learning Up-To-Date (ML UTD) 5! The LifeWithData blog separates the signal from the noise at today’s hectic intersection of software engineering and machine learning.

LifeWithData aims to consistently deliver curated machine learning newsletters that point the reader to key developments without massive amounts of backstory for each. This enables frequent, concise updates across the industry without overloading readers with information.

ML UTD 5 brings updates in the areas of publications, computing, and thought leadership.


[Publication] MIT Goes Underground for Self-driving Cars

Current self-driving cars use a combination of lidar and cameras to “see” while driving, a setup that struggles in weather conditions such as rain and snow. In these adverse conditions, the gap between a car’s believed position on the road and reality can grow large. MIT’s CSAIL laboratory has now used ground-penetrating radar (GPR) to form composition fingerprints of the ground beneath its self-driving cars, which can be matched against a previously built map to keep the car localized even when the road surface is obscured.
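To make the localization idea concrete, below is a toy, one-dimensional sketch of fingerprint matching: a live subsurface scan is slid along a previously mapped profile and placed where the normalized cross-correlation peaks. This illustrates only the general principle, not MIT’s actual LGPR pipeline; the function and variable names are hypothetical.

```python
import numpy as np

def localize(scan: np.ndarray, prior_map: np.ndarray) -> int:
    """Return the offset along the prior map where the live GPR scan
    matches best (toy 1-D fingerprint matching)."""
    scan = (scan - scan.mean()) / scan.std()  # normalize away gain/offset
    best_offset, best_score = 0, -np.inf
    for offset in range(len(prior_map) - len(scan) + 1):
        window = prior_map[offset : offset + len(scan)]
        window = (window - window.mean()) / window.std()
        score = float(scan @ window) / len(scan)  # normalized cross-correlation
        if score > best_score:
            best_offset, best_score = offset, score
    return best_offset

# Hypothetical usage: a 100-sample live scan against a 10,000-sample prior map.
# rng = np.random.default_rng(0)
# prior_map = rng.standard_normal(10_000)
# scan = prior_map[4_200:4_300] + 0.1 * rng.standard_normal(100)
# print(localize(scan, prior_map))  # expected to be near 4200
```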

That being said, the research is only preliminary and needs to go through many more tests. While the potential is truly exciting, the hardware is bulky and cost feasibility has not been determined.

MIT CSAIL’s LGPR at work on the road [source]

[Publication] DeepFakes are Rampant… but Mostly Just in Porn

NYU’s Journal of Legislation and Public Policy published a study on the rapid spread of DeepFakes, focusing mainly on their commoditization and the threats it poses.

The study shows that the technology is still at too early a stage of development to be used for significant benefit or damage, but that its commoditization has led to widespread usage in its original application: face-swapping in pornography.

Although most applications have been nefarious, the tide is beginning to turn, with uses such as multi-language addresses in election campaigns. Let’s hope that the application areas continue to become more beneficial, and that detection mechanisms keep pace.

Based on our findings both on the surface and dark web, we assess that deepfakes are not being widely bought or sold for criminal or disinformation purposes as of early February 2020. One possible reason is that at the current stage of the commoditization of deepfakes, the outputs generated by open source tools are low quality and could not be effectively deployed for criminal purposes.

NYU JLPP

[Thought Leadership] Gary Marcus Shares Vision for AI Through the 20s

Gary Marcus, an author, entrepreneur, and NYU psychology professor, published a careful analysis of artificial intelligence’s progression, calling for a more cognition-based approach to AI in the coming decade.

Recent research in artificial intelligence and machine learning has largely emphasized general-purpose learning and ever-larger training sets and more and more compute. In contrast, I propose a hybrid, knowledge-driven, reasoning-based approach, centered around cognitive models, that could provide the substrate for a richer, more robust AI than is currently possible.

Gary Marcus

Among other things, his analysis calls for a distinction between what is commonly credited as superhuman intelligence and what that truly means. For example, an AI beating a human at chess does not earn it the elusive achievement of artificial general intelligence (AGI); rather, it shows that the AI is superhuman only in that narrowly defined environment.

It is important to remember that machine learning is only a subset of the broader field of artificial intelligence. Many will balk at the suggestion that machine learning has not made fantastic strides in the past few decades. The truth is that it has, while artificial intelligence as a whole still has a long way to go.


[Thought Leadership] Jürgen Schmidhuber On Deep Learning in This Decade and the Next

Jürgen Schmidhuber, the “father of Long Short-Term Memory networks,” reflected on major achievements in deep learning this decade and presented his projections for the decade ahead.
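As a quick refresher on Schmidhuber’s signature contribution, here is a minimal single time-step LSTM cell in NumPy, following the standard gate formulation (input, forget, and output gates plus a candidate state). The parameter packing and names are our own choices for illustration, not any particular library’s API.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W (4H x D), U (4H x H), and b (4H,) stack the
    parameters for the input (i), forget (f), and output (o) gates and
    the candidate state (g)."""
    z = W @ x + U @ h_prev + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    g = np.tanh(g)
    c = f * c_prev + i * g  # gated update of the long-term cell state
    h = o * np.tanh(c)      # gated exposure as the short-term output
    return h, c
```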

His reflections include summaries of developments in FNNs, CNNs, RNNs, GANs, deep RL, and a few others. If you are well-versed in the main ideas of deep learning but have lost track of the key papers over time, this article is a great set of pointers to keep around. Check out a demonstration of OpenAI Five, an applied combination of many of these developments, below.

OpenAI Five Gameplay

He then shifted into crystal ball mode, forecasting a more active presence of AI in the physical world and the rise of “see-and-do” robotics. It’s a great read and should take less than 10 minutes. Check it out here.


[Thought Leadership] VC Firm a16z on AI vs Software Businesses

Major venture capital firm Andreessen Horowitz wrote a very sobering summary of the difficulties many companies face when trying to “plug and play” AI into products in the same manner as software. The difficulties are as follows, and it doesn’t take an MBA to realize that these are potentially crippling product weaknesses for a business:

  1. Lower gross margins
  2. Scaling challenges
  3. Weaker defensive moats

These challenges are no surprise to those entrenched in the endeavor in their day jobs, but this article serves as a great reality check for others to balance against the general hype of artificial intelligence auto-magically solving all of their business problems.

However, we have noticed in many cases that AI companies simply don’t have the same economic construction as software businesses. At times, they can even look more like traditional services companies.

Martin Casado and Matt Bornstein [source]

The above quote resonated in a massive way with me, based on some of my experiences at Pindrop. I believe that this trend of AI-based products needing a high level of service-based support occurs as the product’s core competency becomes more “learned” than programmed. It is much easier for the average consumer to comprehend the “if this then that” of software than the “if this then probably something like that” of AI-powered applications.
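To illustrate that last point, compare a deterministic software rule with a hedged, model-driven counterpart. The fraud-flagging scenario, threshold, and names below are hypothetical.

```python
# Classic software ("if this then that"): behavior is fully specified up front.
def flag_transaction_rule(amount: float) -> bool:
    return amount > 10_000  # hard-coded business rule

# AI-powered ("if this then probably something like that"): behavior is
# learned, so a confidence threshold stands in for certainty.
def flag_transaction_model(features, model, threshold: float = 0.9) -> bool:
    # model is assumed to expose scikit-learn's predict_proba interface
    prob_fraud = model.predict_proba([features])[0][1]
    return prob_fraud > threshold
```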


[Computing] Swift as the Next Python?

Aside from fast.ai’s adoption of PyTorch for its version 1.0, you may have heard that its founder, Jeremy Howard, strongly supports Swift as a potentially dominant language in the data science community. He has assisted the Swift for TensorFlow effort, which has made huge progress towards completion. In a recent blog post, Tryolabs further endorsed Swift’s adoption in the data science community.

It can sometimes be difficult to separate healthy comparison from canonical “language wars,” but the arguments presented in support of Swift cannot be ignored. I’ve long held the opinion that Python is popular in data science for two main reasons: it is simple to write, and it has a healthy library ecosystem. Many other languages have generated hype with simple code that runs faster than Python’s; however, Python’s healthy libraries prevailed. With a Google-sized effort to incorporate Swift into TensorFlow, that could change.


Stay Up To Date

That’s all for ML UTD 5. However, things are happening very quickly in academia and industry! Aside from ML UTD, keep yourself updated with the blog at LifeWithData.

If you’re not a fan of newsletters, but still want to stay in the loop, consider adding lifewithdata.org/blog and lifewithdata.org/tag/ml-utd to a Feedly aggregation setup.
