
  • CAN (Creative Adversarial Network) - Explained
    • GANs (Generative Adversarial Networks), a type of deep learning network, have been very successful at creating non-procedural content. This work explores the possibility of machine-generated creative content.
  • 109 Commonly Asked Data Science Interview Questions
    • In a data science interview, the interviewer will ask questions spanning a wide range of topics, requiring strong technical knowledge and communication skills on the part of the interviewee. … This guide contains all of the data science interview questions an interviewee should expect when interviewing for a position as a data scientist.
  • Bloomberg’s visualization library
  • Text Classifier Algorithms in Machine Learning
    • In this article, we’ll focus on a few of the main generalized approaches to text classifier algorithms and their use cases.
  • Modeling Agents with Probabilistic Programs
    • This book describes and implements models of rational agents for (PO)MDPs and Reinforcement Learning.
  • A Deep Introduction to Julia for Data Science and Scientific Computing
    • This workshop was put together by Chris Rackauckas as part of the UC Irvine Data Science Initiative. It is designed to teach the relatively new language Julia to people who are experienced with other scripting languages. Unlike the other Data Science Initiative workshops, this one assumes prior programming knowledge in a language such as Python, R, or MATLAB.
  • Applying deep learning to real-world problems
    • In this blog post I want to share three key learnings that helped us at Merantix when applying deep learning to real-world problems: Learning I: the value of pre-training; Learning II: caveats of real-world label distributions; Learning III: understanding black box models.
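As a concrete illustration of the kind of generalized approach the text-classifier article above surveys, here is a minimal multinomial naive Bayes classifier in plain Python. This is a sketch only: the class name and toy training data are mine, not from the linked article, and a real project would reach for a library such as scikit-learn instead.

```python
from collections import Counter, defaultdict
import math


class NaiveBayesTextClassifier:
    """Minimal multinomial naive Bayes with add-one (Laplace) smoothing.

    Illustrative toy implementation, not from the linked article.
    """

    def fit(self, docs, labels):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter(labels)      # label -> number of docs
        self.vocab = set()
        for doc, label in zip(docs, labels):
            words = doc.lower().split()
            self.word_counts[label].update(words)
            self.vocab.update(words)
        return self

    def predict(self, doc):
        words = doc.lower().split()
        total_docs = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label, n_docs in self.label_counts.items():
            # log prior + sum of smoothed log likelihoods
            score = math.log(n_docs / total_docs)
            total_words = sum(self.word_counts[label].values())
            for w in words:
                score += math.log(
                    (self.word_counts[label][w] + 1)
                    / (total_words + len(self.vocab))
                )
            if score > best_score:
                best_label, best_score = label, score
        return best_label


clf = NaiveBayesTextClassifier().fit(
    ["free prize money now", "win cash prize",
     "meeting agenda attached", "project status update"],
    ["spam", "spam", "ham", "ham"],
)
print(clf.predict("claim your free prize"))  # -> spam
```

Unseen words like "claim" still contribute a small smoothed probability rather than zeroing out the whole product, which is what the add-one smoothing buys you.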