WHAT I READ FOR DEEP-LEARNING
Published: 2019-06-13


Today, I spent some time on two new papers: one proposing a new way of training very deep neural networks (Highway Networks), and one proposing a new activation function for Auto-Encoders that avoids the need for any regularization method such as Contraction or Denoising.

Let's start with the first one. The Highway Networks paper proposes a new activation type similar to LSTM networks, and the authors claim that this peculiar activation is robust to the choice of initialization scheme and to the learning problems that occur in very deep NNs. It is also encouraging to see that they trained models with more than 100 layers. The basic intuition here is to learn a gating function, attached to a real activation function, that decides whether to pass the activation or the input itself. Here is the formulation:

y = T(x, W_T) * H(x, W_H) + (1 - T(x, W_T)) * x

T(x, W_T) is the gating function and H(x, W_H) is the real activation. In the paper they use a Sigmoid activation for the gate and a Rectifier for the normal activation. I also implemented it with Lasagne and tried to replicate the results (I aim to release the code later). It is really impressive to see its ability to learn with 50 layers (this is the most my PC can handle).
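For illustration only (this is not the paper's code nor the Lasagne implementation mentioned above), a single fully connected highway layer in plain NumPy might look like the sketch below; the weight matrices W_H, W_T and biases b_H, b_T are assumed to map an input of size d to an output of the same size d, as the formulation requires.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(0.0, z)

def highway_layer(x, W_H, b_H, W_T, b_T):
    """One highway layer: y = T(x) * H(x) + (1 - T(x)) * x.
    H is the rectifier ("real") activation, T is the sigmoid gate."""
    H = relu(x @ W_H + b_H)       # candidate activation H(x, W_H)
    T = sigmoid(x @ W_T + b_T)    # gate T(x, W_T), values in (0, 1)
    return T * H + (1.0 - T) * x  # blend the activation and the raw input
```

When the gate saturates towards zero, a layer simply copies its input forward, which is what lets stacks of 50 or 100 such layers still learn.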

The other paper suggests the use of non-biased rectifier units for the inference stage of AEs. You can train your model with a biased Rectifier Unit, but at inference (test) time you should extract features by ignoring the bias term. They show that doing so gives better recognition on the CIFAR dataset. They also devise a new activation function with a similar intuition to Highway Networks: again, there is a gating unit which thresholds the normal activation function.
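In other words, the bias only shapes training; at test time the features come from the weights alone. A minimal sketch of that idea (an illustration, not code from the paper):

```python
import numpy as np

def encode_train(x, W, b):
    """Training-time encoding: a biased rectifier unit."""
    return np.maximum(0.0, x @ W + b)

def encode_test(x, W):
    """Test-time feature extraction: the bias term is simply dropped."""
    return np.maximum(0.0, x @ W)
```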

The threshold function uses a predefined threshold (they use 1 in their experiments), and the reconstruction of the proposed model is built on top of it. Note that the model that thresholds on the square of the linear activation is called TLin, while the variant that thresholds the plain linear activation is called TRec. What this activation does is diminish the small activations, so the model is implicitly regularized without any additional regularizer. This is actually good for learning an over-complete representation of the given data.
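Concretely, as described above, TRec gates the linear activation on its own value while TLin gates it on its square; a rough sketch with the threshold of 1 they report (an illustration, not the paper's code):

```python
import numpy as np

def trec(z, theta=1.0):
    """TRec: keep the linear activation only where it exceeds the threshold."""
    return z * (z > theta)

def tlin(z, theta=1.0):
    """TLin: threshold on the squared activation, so large activations of
    either sign pass while small ones are suppressed."""
    return z * (z * z > theta)
```

Suppressing the small activations this way is exactly the implicit regularization mentioned above.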

For more than this brief intro, please refer to the papers :) and let me know of any mistakes.

These two papers show a coming trend in the Deep Learning community: the use of more complex activation functions. We can call it controlling each unit's behavior in a smart way, instead of letting the units fire naively. My intuition agrees with this idea. I believe we need even more sophistication for the smart units in our deep models, like Spike and Slab networks.

 

Reposted from: https://www.cnblogs.com/yymn/p/4498514.html
