Sunday, September 25, 2016

$A^2T$: Attend, Adapt and Transfer: Attentive Deep Architecture for Adaptive Transfer from Multiple Sources. (arXiv:1510.02879v3 [cs.AI] UPDATED)

The ability to transfer knowledge from source tasks to a new target task can be very useful in speeding up the learning of a Reinforcement Learning agent. Such transfer has been receiving much attention lately, yet it poses two serious challenges that have not been adequately addressed. First, the agent should avoid negative transfer, which occurs when the transfer hampers or slows down learning instead of helping it. Second, the agent should be capable of selective transfer: choosing which of multiple source tasks to transfer from in different parts of the state space of the target task. We propose $A^2T$ (Attend, Adapt and Transfer), an attentive deep architecture for adaptive transfer that addresses both challenges. $A^2T$ is generic enough to effect transfer of either policies or value functions. Empirical evaluations on different learning algorithms show that $A^2T$ is an effective architecture for transfer learning, avoiding negative transfer while transferring selectively from multiple sources.
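A minimal sketch (in PyTorch) of the kind of attentive combination the abstract describes: a state-conditioned attention network weighs N frozen source solutions plus a base network learned from scratch, and the target output is their convex combination. The module names, layer sizes, and the assumption that each source maps a state to the same output dimension are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class A2TSketch(nn.Module):
    def __init__(self, state_dim, out_dim, source_nets, hidden=64):
        super().__init__()
        # Pretrained source solutions, kept frozen during target training.
        self.sources = nn.ModuleList(source_nets)
        for p in self.sources.parameters():
            p.requires_grad = False
        # Base network: learned from scratch on the target task.
        self.base = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim)
        )
        # Attention network: one weight per source plus one for the base.
        self.attention = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, len(source_nets) + 1),
        )

    def forward(self, state):
        # Candidate outputs stacked as (batch, N+1, out_dim).
        outs = [net(state) for net in self.sources] + [self.base(state)]
        outs = torch.stack(outs, dim=1)
        # State-dependent soft attention over the candidates.
        w = F.softmax(self.attention(state), dim=-1).unsqueeze(-1)
        # Convex combination: an unhelpful source can receive near-zero
        # weight (avoiding negative transfer), and weights can differ
        # across states (enabling selective transfer).
        return (w * outs).sum(dim=1)
```

Because the attention weights depend on the state, the agent can lean on one source solution in one region of the state space and fall back on the base network elsewhere, which is how the architecture supports both selective transfer and robustness to negative transfer.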



from cs.AI updates on arXiv.org http://ift.tt/1MtHoln
via IFTTT
