Tutorial 1: Gradient Descent and AutoGrad
Tutorial 1: Regularization techniques part 1
Tutorial 2: Regularization techniques part 2

Convnets And Recurrent Neural Networks (W2D1)
Tutorial 1: Learn how to use modern convnets
(Bonus) Tutorial 2: Facial recognition using modern convnets
Tutorial 1: Modeling sequences and encoding text
Tutorial 2: Modern RNNs and their variants
Tutorial 1: Learn how to work with Transformers
Tutorial 1: Variational Autoencoders (VAEs)
Tutorial 3: Conditional GANs and Implications of GAN Technology
Wrap-up
(Bonus) Tutorial 4: Deploying Neural Networks on the Web
Deep Learning: Doing more with fewer parameters

Unsupervised And Self Supervised Learning (W3D1)
Tutorial 1: Un/Self-supervised learning methods
Tutorial 1: Introduction to Reinforcement Learning
Tutorial 1: Introduction to Continual Learning
Tutorial 2: Out-of-distribution (OOD) Learning

Example Model Project: the Train Illusion
Knowledge Extraction from a Convolutional Neural Network
Music classification and generation with spectrograms
Something Screwy - image recognition, detection, and classification of screws
Data Augmentation in image classification models
NMA Robolympics: Controlling robots using reinforcement learning
Performance Analysis of DQN Algorithm on the Lunar Lander task
Vision with Lost Glasses: Modelling how the brain deals with noisy input
Focus on what matters: inferring low-dimensional dynamics from neural recordings
Moving beyond Labels: Finetuning CNNs on BOLD response
Section 1: Sequences, Markov Chains & HMMs
Why is this relevant? How are these sequences related to modern recurrent neural networks?

Section 1.2: What is a Markov Chain or Model?
Function for sampling next word with weights
Function for a stochastic chain using weighted choice
Function for a stochastic chain for sets of words
Function to sample next word after a sequence
Think! 1.2: How does changing parameters affect the generated sentences?
(A minimal sketch of this weighted sampling idea appears right after this outline.)

Section 1.3: What is a Hidden Markov Model?
Function to create default Multinomial HMM model
Function to create a Multinomial HMM model with information of relative frequencies of words
Function to generate words given an HMM model
(Bonus) Exercise 1.3: Transition probabilities

Section 2.3: Exploring meaning with word embeddings
Download FastText English Embeddings of dimension 100

Section 3: Neural Net with word embeddings
Coding Exercise 3.1: Simple Feed Forward Net
Download embeddings and clear old variables to clean memory.
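The Section 1.2 items above all build on one idea: choose the next word from the successors observed in a corpus, in proportion to how often each one occurred. The tutorial's own functions are not reproduced here; the following is a minimal self-contained sketch of that idea, and every name in it (build_chain, sample_next_word, generate) is illustrative rather than the tutorial's API.

```python
import random
from collections import defaultdict

def build_chain(words):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        chain[current_word].append(next_word)
    return chain

def sample_next_word(chain, word):
    """Pick a successor of `word`, weighted by observed frequency."""
    followers = chain.get(word)
    if not followers:  # dead end: restart from any word that has successors
        return random.choice(list(chain))
    counts = {w: followers.count(w) for w in set(followers)}
    candidates, weights = zip(*counts.items())
    return random.choices(candidates, weights=weights, k=1)[0]

def generate(chain, seed_word, length=8):
    """Generate `length` words starting from `seed_word`."""
    out = [seed_word]
    for _ in range(length - 1):
        out.append(sample_next_word(chain, out[-1]))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran".split()
print(generate(build_chain(corpus), "the"))
```

Note that `random.choice(followers)` alone would already be frequency-weighted, since repeats are kept in the successor list; the explicit counts just make the weights visible, which is the point of the exercise.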
# Helper functions

```python
import random

import matplotlib.pyplot as plt
import numpy as np
import requests
import torch
from nltk.tokenize import word_tokenize
from torchtext import data, datasets  # moved to torchtext.legacy in torchtext >= 0.9


def cosine_similarity(vec_a, vec_b):
    """Compute cosine similarity between vec_a and vec_b"""
    return np.dot(vec_a, vec_b) / (np.linalg.norm(vec_a) * np.linalg.norm(vec_b))


def tokenize(sentences):
    # Tokenize the sentence
    # from nltk.tokenize library use word_tokenize
    token = word_tokenize(sentences)
    return token


def plot_train_val(x, train, val, train_label, val_label, title, y_label, color):
    plt.plot(x, train, label=train_label, color=color)
    plt.plot(x, val, label=val_label, color=color, linestyle='--')
    plt.legend()
    plt.xlabel('epoch')
    plt.ylabel(y_label)
    plt.title(title)


def load_dataset(emb_vectors, sentence_length=50, seed=522):
    TEXT = data.Field(sequential=True, tokenize=tokenize, lower=True,
                      include_lengths=True, batch_first=True,
                      fix_length=sentence_length)
    LABEL = data.LabelField(dtype=torch.float)
    # IMDB sentiment dataset (assumed; the dataset name was lost in extraction)
    train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)
    TEXT.build_vocab(train_data, vectors=emb_vectors)
    LABEL.build_vocab(train_data)
    train_data, valid_data = train_data.split(split_ratio=0.7,
                                              random_state=random.seed(seed))
    train_iter, valid_iter, test_iter = data.BucketIterator.splits(
        (train_data, valid_data, test_data), batch_size=32,
        sort_key=lambda x: len(x.text), repeat=False, shuffle=True)
    vocab_size = len(TEXT.vocab)
    return TEXT, vocab_size, train_iter, valid_iter, test_iter


def download_file_from_google_drive(id, destination):
    URL = "https://docs.google.com/uc?export=download"
    session = requests.Session()
    response = session.get(URL, params={'id': id}, stream=True)
    token = get_confirm_token(response)
    if token:
        params = {'id': id, 'confirm': token}
        response = session.get(URL, params=params, stream=True)
    save_response_content(response, destination)


def get_confirm_token(response):
    for key, value in response.cookies.items():
        if key.startswith('download_warning'):
            return value
    return None


def save_response_content(response, destination):
    CHUNK_SIZE = 32768
    with open(destination, "wb") as f:
        for chunk in response.iter_content(CHUNK_SIZE):
            if chunk:  # filter out keep-alive chunks
                f.write(chunk)
```
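For orientation, here is a hypothetical sketch of how the helpers above fit together. The Google Drive file id and filename are placeholders rather than the course's actual values, and the vectors are toy numbers rather than real embeddings:

```python
import numpy as np

# Placeholder id and filename -- substitute real values before running:
# download_file_from_google_drive("<embedding-file-id>", "embeddings.vec")

# cosine_similarity works on any pair of equal-length vectors; toy values here:
v_king = np.array([0.50, 0.10, 0.90])
v_queen = np.array([0.45, 0.20, 0.85])
print(cosine_similarity(v_king, v_queen))  # ~0.99: nearly parallel vectors
```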