전체 글
-
1. Closure, Decorator, Generator - Python Study, 2020. 2. 29. 01:08
*First Class Function : - A function can be used like a variable, like an object. [LINK] http://schoolofweb.net/blog/posts/%ED%8C%8C%EC%9D%B4%EC%8D%AC-%ED%8D%BC%EC%8A%A4%ED%8A%B8%ED%81%B4%EB%9E%98%EC%8A%A4-%ED%95%A8%EC%88%98-first-class-function/ *Closure : 1) A function inside a function. 2) Lets you use variables and memory efficiently. 3) A closure stores the values of a function's free variables somewhere. 4) A closure thus makes it easy to create several functions from a single function, and, without modifying existing functions or modules, a wrapper function..
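A minimal sketch of the closure ideas summarized above, in Python; the names `make_multiplier`, `double`, and `triple` are illustrative, not from the linked post:

```python
def make_multiplier(factor):
    """Factory whose inner function closes over the free variable `factor`."""
    def multiply(x):
        # `factor` is not defined in this scope; the closure keeps its value alive.
        return x * factor
    return multiply

# One factory function produces several distinct functions.
double = make_multiplier(2)
triple = make_multiplier(3)
print(double(10), triple(10))               # 20 30
print(double.__closure__[0].cell_contents)  # 2 -- the stored free-variable value
```

The same mechanism underlies decorators: a wrapper function closes over the function it wraps, so existing code can be extended without modifying it.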
-
[PR#2] (Quantization) Binarized Neural Networks : Training Neural Networks with Weights and Activations Constrained to +1 or -1 (arXiv 16) - Paper Review, 2020. 1. 21. 15:41
[20.1.21] draft [LINK] : https://arxiv.org/abs/1602.02830 Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1 We introduce a method to train Binarized Neural Networks (BNNs) - neural networks with binary weights and activations at run-time. At training-time the binary weights and activations are used for computing the parameters gradients..
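A minimal sketch of the core trick in the excerpt (deterministic binarization to +1/-1 with a straight-through estimator for the backward pass), written in PyTorch as an assumption; the class name `BinarizeSTE` is illustrative and not from the paper:

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Binarize to +1/-1 in the forward pass; straight-through estimator backward."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # torch.sign maps 0 to 0; the paper's binarization treats 0 as +1, glossed over here.
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Pass the gradient through unchanged, but only where |x| <= 1
        # (hard-tanh clipping), so the real-valued parameters keep learning.
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)

# Usage: binarize real-valued weights on the fly during the forward pass.
w_real = torch.randn(4, 4, requires_grad=True)
w_bin = BinarizeSTE.apply(w_real)
loss = w_bin.sum()
loss.backward()
print(w_real.grad)
```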
-
[2020.Q1] Deep Learning Paper List - Paper Review, 2020. 1. 21. 13:50
[Upcoming] [Reviewed] [Less Concerning] [Compression, FPGA, Weight Sharing] DEEP COMPRESSION: COMPRESSING DEEP NEURAL NETWORKS WITH PRUNING, TRAINED QUANTIZATION AND HUFFMAN CODING Going Deeper with Embedded FPGA Platform for Convolutional..
-
[PR#1] (Pruning) Learning both Weights and Connections for Efficient Neural Networks (NIPS 2015) - Paper Review, 2020. 1. 7. 17:31
[LINK] : https://arxiv.org/abs/1506.02626 Learning both Weights and Connections for Efficient Neural Networks Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To..