- Lambda cannot upload to S3: https://stackoverflow.com/questions/35589641/aws-lambda-function-getting-access-denied-when-getobject-from-s3 ; improving Node.js performance: https://www.smashingmagazine.com/2012/11/writing-fast-memory-efficient-javascript/
6 Mar 2018 - 1. https://www.jair.org/media/4992/live-4992-9623-jair.pdf In this paper the author explains the basics of NLP and the data structures for feeding words into a neural network. 2. https://lti.cs.cmu.edu/sites/default/files/research/thesis/2011/michael_heilman_automatic_factual_question_generation_for_reading_assessment.pdf Generates questions based on input text.
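Not from the paper itself, but a minimal sketch of the usual way words are mapped to integer ids before being fed to a neural network (the toy sentences, special tokens, and padding length here are made up for illustration):

```python
# Build a vocabulary, map words to integer ids, and pad to a fixed length.
from collections import Counter

sentences = [["the", "cat", "sat"], ["the", "dog", "barked", "loudly"]]

# Reserve id 0 for padding and id 1 for unknown words.
counts = Counter(w for s in sentences for w in s)
word2idx = {w: i + 2 for i, (w, _) in enumerate(counts.most_common())}
word2idx["<pad>"], word2idx["<unk>"] = 0, 1

def encode(sentence, max_len=5):
    """Map words to ids and pad/truncate to max_len."""
    ids = [word2idx.get(w, word2idx["<unk>"]) for w in sentence]
    return (ids + [word2idx["<pad>"]] * max_len)[:max_len]

print([encode(s) for s in sentences])  # integer ids, ready for an embedding layer
```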
5 Mar 2018 - reference: https://appliedmachinelearning.wordpress.com/2017/09/28/topic-modelling-part-2-discovering-topics-from-articles-with-latent-dirichlet-allocation/ https://www.analyticsvidhya.com/blog/2016/08/beginners-guide-to-topic-modeling-in-python/
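A rough, self-contained sketch of LDA topic modelling (here with scikit-learn on made-up toy documents; the linked posts may use different libraries and real article data):

```python
# Discover topics in a tiny corpus with Latent Dirichlet Allocation.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "stock market trading price investor",
    "football match goal player league",
    "market price economy bank inflation",
    "player team goal coach season",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)  # document-term count matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# Print the top words for each discovered topic.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")
```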
5 Feb 2018 - Ensembles use two main techniques, known as bagging and boosting.
Bagging
- Build many predictors or models
- Take a random sub-sample or bootstrap of the rows (data)
- Average the results or take the majority vote
- The models should be only loosely correlated with each other, to reduce variance
Boosting
- Predictors are built sequentially
- Each predictor learns from the previous predictor's mistakes
- Very fast
- Can lead to overfitting
Reference
https://medium.com/mlreview/gradient-boosting-from-scratch-1e317ae4587d
https://www.dataquest.io/blog/introduction-to-ensembles/
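A minimal sketch of the two approaches with scikit-learn (the dataset and hyperparameters below are arbitrary, just to contrast the two APIs):

```python
# Bagging vs. boosting on a synthetic classification task (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Bagging: many trees fit independently on bootstrap samples; predictions are averaged/voted.
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0)

# Boosting: trees are built sequentially, each one correcting the previous ones' errors.
boosting = GradientBoostingClassifier(n_estimators=100, random_state=0)

for name, model in [("bagging", bagging), ("boosting", boosting)]:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {score:.3f}")
```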
29 Jan 2018 - https://console.cloud.google.com/cloudshell/editor?shellonly=true https://colab.research.google.com/notebook https://quillbot.com/ # paraphrase bot http://ndres.me/kaggle-past-solutions/
13 Jan 2018 -
https://towardsdatascience.com/linear-algebra-cheat-sheet-for-deep-learning-cd67aba4526c (Linear Algebra basics)
https://harishnarayanan.org/writing/artistic-style-transfer/
https://appliedgo.net/perceptron/#inside-an-artificial-neuron
https://www.khanacademy.org/math/multivariable-calculus/multivariable-derivatives/partial-derivative-and-gradient-articles/a/introduction-to-partial-derivatives
https://medium.com/@nokkk/jupyter-notebook-tricks-for-data-science-that-enhance-your-efficiency-95f98d3adee4
https://medium.com/@bwest87/building-a-deep-neural-net-in-google-sheets-49cdaf466da0
https://codeburst.io/jupyter-notebook-tricks-for-data-science-that-enhance-your-efficiency-95f98d3adee4
https://najeebkhan.github.io/blog/VecCal.html (Jacobian vs Hessian)
http://fa.bianp.net/teaching/2018/eecs227at/ (optimization algorithms, bird's-eye view)
https://nbviewer.jupyter.org/github/groverpr/learn_python_libraries/blob/master/pandas/pandas_cheatsheet.ipynb (basic pandas DataFrames)
https://github.com/Stephen-Rimac/Python-for-Data-Scientists/blob/master/Python%20for%20Data%20Scientists.ipynb (Python for data scientists)
https://towardsdatascience.com/beyond-accuracy-precision-and-recall-3da06bea9f6c (precision and recall)
https://medium.com/@yu4u/why-mobilenet-and-its-variants-e-g-shufflenet-are-fast-1c7048b9618d (MobileNet)
https://qiita.com/odanado/items/ffb685ba48f8a2a51683 (embedding visualization)
https://medium.com/@shivamgoel1791/everything-you-need-to-know-about-neural-style-transfer-994530cc9a6e (neural style transfer)
https://jalammar.github.io/visualizing-neural-machine-translation-mechanics-of-seq2seq-models-with-attention/ (NMT with attention)
https://www.kaggle.com/annavictoria/ml-friendly-public-datasets
https://machinelearningmastery.com/how-to-use-statistics-to-identify-outliers-in-data/?__s=soesfvn8qaszfihuwqqp
https://tuatini.me/part-1-how-to-setup-your-own-environment-for-deep-learning/
https://christophm.github.io/interpretable-ml-book/intro.html
https://towardsdatascience.com/semantic-segmentation-with-deep-learning-a-guide-and-code-e52fc8958823
Visualization
https://projector.tensorflow.org/
https://github.com/tensorflow/lucid#notebooks
https://medium.com/@Zelros/a-brief-history-of-machine-learning-models-explainability-f1c3301be9dc
http://www.benfrederickson.com/numerical-optimization/
10 Jan 2018 - The class started with the fast.ai course by Jeremy Howard. Since I had already watched his videos before without implementing anything, joining this event lets me start implementing the tutorial from the videos and follow along with the other participants. Some participants asked good questions, which also helped me understand the course better. The next session covers deep learning theory from the Stanford STAT385 course material: the participants are divided into 7 groups, and each group reads 1 of the 7 research papers related to convolutional neural networks.
6 Jan 2018 - Deep learning is one of the current trends in IT. Google has exponentially increased the number of products it develops using deep learning, which I find interesting. I tried to develop my own deep learning model before and got stuck on how and what actually happens inside it, since a deep learning model, or neural network, is a black box. I slowly started learning from the top by cloning someone else's code and working out what happens and why, sometimes by reading the author's blog posts or research papers, but I still got stuck at some points.
31 Dec 2017 - 1. https://research.googleblog.com/2017/12/tacotron-2-generating-human-like-speech.html 2. https://docs.opencv.org/3.4.1/d9/dab/tutorial_homography.html planar for AR 3. https://rajatvd.github.io/Animating-Doodles-With-Autoencoders/
28 Dec 2017 - September 2017: When I first joined Vase, the very first task I got was optimizing the algorithm for querying interlocked criteria and making sure the ratio for the given criteria is correct. The program cannot achieve the exact ratio because there are not enough data rows. I needed to improve the result and make sure the returned ratio is as close as possible to the requested ratio. It was a challenging task for me since the criteria interlock not in one or two dimensions but in n dimensions, which is very hard to optimize.
25 Dec 2017