We compare the performance of an LSTM network in Chainer with and without cuDNN. The NVIDIA CUDA® Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks; it provides highly tuned implementations of standard routines such as LSTMs and convolutions.
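To make the comparison concrete, here is a minimal NumPy sketch of the per-step computation an LSTM cell performs — the sequence of matrix multiplications and element-wise gate operations that cuDNN fuses into optimized GPU kernels. The gate ordering and weight layout below are illustrative assumptions, not cuDNN's actual internal layout.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x, h, c, W, U, b):
    """One LSTM step: x is the input, (h, c) the previous hidden and
    cell states, W/U/b hold the stacked weights for all four gates."""
    H = h.shape[-1]
    # Single fused affine transform for all gates, shape (4 * H,).
    z = x @ W + h @ U + b
    i = sigmoid(z[..., 0:H])          # input gate
    f = sigmoid(z[..., H:2 * H])      # forget gate
    g = np.tanh(z[..., 2 * H:3 * H])  # cell candidate
    o = sigmoid(z[..., 3 * H:4 * H])  # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

Without cuDNN, each of these operations launches as a separate GPU kernel per time step; cuDNN's RNN routines batch and fuse them, which is where the speedup in the benchmark comes from.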
ChainerRL, a Chainer-based deep reinforcement learning library, has been released: https://github.com/pfnet/chainerrl
At the Deep Learning Summit 2017 in San Francisco this January, PFN announced advancements in distributed deep learning using Chainer in multi-node environments. In this post, I would like to explain the details of the announcement.
Recently we found some great research projects that use Chainer for their algorithm implementations and experiments. We searched arXiv for such publicly available projects and summarized them in a table listing each paper along with its URL: Research projects using Chainer.
We are planning the first major update of Chainer! It is currently scheduled for next March or April.
Starting from the release of v1.18.0, we will have one release every four weeks instead of the current cycle of one release every two weeks.
This is the official blog of Chainer, a framework for neural networks. In this blog, we will provide information about Chainer and its development, including:
Chainer is a Python-based, standalone open source framework for deep learning models. Chainer provides a flexible, intuitive, and high-performance means of implementing a full range of deep learning models, including state-of-the-art models such as recurrent neural networks and variational autoencoders.
- Performance comparison of LSTM with and without cuDNN (v5) in Chainer
- ChainerRL - Deep Reinforcement Learning Library
- Performance of Distributed Deep Learning using ChainerMN
- Research projects using Chainer
- Plan of v2
- New release cycle
- About Chainer Blog