Data Science & Machine Learning Newsletter # 98
Posted on Fri 14 July 2017 in Data Science & Machine Learning Newsletter
Want to get updates? Please join the Data Science & Machine Learning Newsletter LinkedIn Group

CAN (Creative Adversarial Network) Explained
 GANs (Generative Adversarial Networks), a type of deep learning network, have been very successful in generating non-procedural content. This work explores the possibility of machine-generated creative content.
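As background for the post, the core of any GAN is the adversarial objective: a discriminator D is trained to tell real samples from generated ones, while a generator G is trained to fool it. The sketch below illustrates the standard GAN losses on toy probabilities; it is an illustration of the general framework, not code from the CAN paper.

```python
import numpy as np

# Minimal sketch of the GAN objective (illustrative, not the CAN paper's code):
# the discriminator maximizes log D(x) + log(1 - D(G(z))),
# while the generator (non-saturating form) maximizes log D(G(z)).

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy form of the discriminator objective.

    d_real: D's probabilities on real samples; d_fake: on generated samples.
    """
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def generator_loss(d_fake):
    """Non-saturating generator loss: maximize log D(G(z))."""
    return -np.mean(np.log(d_fake))

# Toy check: a confident discriminator has a low D loss and a high G loss.
d_real = np.array([0.9, 0.95])
d_fake = np.array([0.1, 0.05])
print(round(discriminator_loss(d_real, d_fake), 3))  # → 0.157
print(round(generator_loss(d_fake), 3))              # → 2.649
```

CAN extends this setup with an additional loss term that pushes the generator toward outputs the discriminator cannot assign to known art styles.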

109 Commonly Asked Data Science Interview Questions
 For a data science interview, an interviewer will ask questions spanning a wide range of topics, requiring strong technical knowledge and communication skills on the part of the interviewee. ... This guide contains all of the data science interview questions an interviewee should expect when interviewing for a position as a data scientist.
bqplot
 Bloomberg's visualization library: https://github.com/bloomberg/bqplot
 Text Classifier Algorithms in Machine Learning
 In this article, we’ll focus on a few of the main general approaches to text classification and their use cases.
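One of the classic approaches covered in most such overviews is Naive Bayes. The sketch below is a hypothetical minimal multinomial Naive Bayes classifier with Laplace smoothing; the article's own examples may use different algorithms.

```python
from collections import Counter, defaultdict
import math

# Illustrative multinomial Naive Bayes text classifier (a common baseline,
# not necessarily the approaches the linked article walks through).

def train_nb(docs):
    """docs: list of (text, label). Returns log-priors and smoothed word log-likelihoods."""
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in docs:
        for w in text.lower().split():
            word_counts[label][w] += 1
            vocab.add(w)
    model = {}
    for label in class_counts:
        total_words = sum(word_counts[label].values())
        denom = total_words + len(vocab)  # Laplace (add-one) smoothing
        model[label] = {
            "prior": math.log(class_counts[label] / len(docs)),
            "likelihood": {w: math.log((word_counts[label][w] + 1) / denom) for w in vocab},
            "unseen": math.log(1 / denom),
        }
    return model

def classify(model, text):
    """Pick the label with the highest posterior log-probability."""
    scores = {
        label: params["prior"] + sum(
            params["likelihood"].get(w, params["unseen"]) for w in text.lower().split())
        for label, params in model.items()
    }
    return max(scores, key=scores.get)

docs = [("great fun movie", "pos"), ("terrible boring film", "neg"),
        ("fun and great", "pos"), ("boring and terrible", "neg")]
model = train_nb(docs)
print(classify(model, "a great film"))  # → pos
```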
 Modeling Agents with Probabilistic Programs
 This book describes and implements models of rational agents for (PO)MDPs and Reinforcement Learning.
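For MDP-based agent models like those in the book, the standard planning routine is value iteration over the Bellman optimality equation. The toy MDP and function names below are invented for illustration; the book implements its agents in a probabilistic programming language rather than plain Python.

```python
# Hedged sketch of value iteration for a tiny MDP (illustrative; the toy
# two-state MDP here is made up, not an example from the book).

def value_iteration(states, actions, transition, reward, gamma=0.9, tol=1e-6):
    """transition[s][a] -> list of (next_state, prob); reward[s][a] -> float."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman optimality update: best expected discounted return.
            best = max(
                reward[s][a] + gamma * sum(p * V[s2] for s2, p in transition[s][a])
                for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Two-state toy MDP: "stay" earns nothing, "go" earns 1 and flips the state.
states = ["A", "B"]
actions = ["stay", "go"]
transition = {s: {"stay": [(s, 1.0)],
                  "go": [("B" if s == "A" else "A", 1.0)]} for s in states}
reward = {s: {"stay": 0.0, "go": 1.0} for s in states}
V = value_iteration(states, actions, transition, reward)
print(round(V["A"], 2))  # → 10.0, i.e. the discounted sum 1 / (1 - 0.9)
```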
 A Deep Introduction to Julia for Data Science and Scientific Computing
 This workshop is put together by Chris Rackauckas as part of the UC Irvine Data Science Initiative. This workshop is made to teach people who are experienced with other scripting languages the relatively new language Julia. Unlike the other Data Science Initiative workshops, this workshop assumes prior knowledge of some form of programming in a language such as Python, R, or MATLAB.
Applying Deep Learning to Real-World Problems
 In this blog post I want to share three key learnings which helped us at Merantix when applying deep learning to real-world problems:
 - Learning I: the value of pre-training
 - Learning II: caveats of real-world label distributions
 - Learning III: understanding black box models
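One common response to the second point, skewed real-world label distributions, is to reweight the training loss by inverse class frequency so that rare classes are not drowned out. The sketch below is a generic illustration of that idea, not Merantix's actual method.

```python
from collections import Counter

# Inverse-frequency class weighting: a standard remedy for imbalanced
# label distributions (illustrative, not the blog post's own code).

def inverse_frequency_weights(labels):
    """Weight each class so rare classes contribute as much as common ones."""
    counts = Counter(labels)
    total = len(labels)
    return {label: total / (len(counts) * n) for label, n in counts.items()}

# 90/10 imbalance: the rare class gets roughly 9x the common class's weight.
labels = ["ok"] * 90 + ["defect"] * 10
weights = inverse_frequency_weights(labels)
print(round(weights["ok"], 3), round(weights["defect"], 3))  # → 0.556 5.0
```

These weights would typically be passed to a weighted loss function during training so each class contributes equally in expectation.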