# NeurIPS 2019: Learn to Move - Walk Around

This repository contains the software required to participate in the NeurIPS 2019 Challenge: Learn to Move - Walk Around. See more details about the challenge [here](https://www.aicrowd.com/challenges/neurips-2019-learn-to-move-walk-around), and the full documentation of our reinforcement learning environment [here](https://osim-rl.stanford.edu). This document covers the basic steps to get set up for the challenge.

Your task is to develop a controller for a physiologically plausible 3D human model that walks or runs following velocity commands with minimum effort. You are provided with a human musculoskeletal model and a physics-based simulation environment, OpenSim. There will be three tracks:

1) **Best performance**
2) **Novel ML solution**
3) **Novel biomechanical solution**

The winner of each track will be awarded.

To model physics and biomechanics we use [OpenSim](https://github.com/opensim-org/opensim-core), a biomechanical physics environment for musculoskeletal simulations.

## What's new compared to NIPS 2017: Learning to run?

We took into account comments from the last challenge and made several changes:

* You can use experimental data (to greatly speed up the learning process)
* We released the 3rd dimension (the model can fall sideways)
* We added a prosthetic leg -- the goal is to solve a medical challenge by modeling how walking changes after getting a prosthesis. Your work can speed up the design, prototyping, or tuning of prosthetics!

You haven't heard of NIPS 2017: Learning to run? [Watch this video!](https://www.youtube.com/watch?v=rhNxt0VccsE)

![HUMAN environment](https://s3.amazonaws.com/osim-rl/videos/running.gif)

## Getting started

**Anaconda** is required to run our simulations. Anaconda creates a virtual environment with all the necessary libraries, avoiding conflicts with libraries in your operating system. You can get Anaconda from https://docs.anaconda.com/anaconda/install/. In the following instructions we assume that Anaconda is successfully installed.

For the challenge we prepared [OpenSim](http://opensim.stanford.edu/) binaries as a conda environment to make the installation straightforward.

We support Windows, Linux, and Mac OSX (all 64-bit). To install our simulator, you first need to create a conda environment with the OpenSim package.

On **Windows**, open a command prompt and type:

    conda create -n opensim-rl -c kidzik -c conda-forge opensim python=3.6.1
    activate opensim-rl
    pip install osim-rl

On **Linux/OSX**, run:

    conda create -n opensim-rl -c kidzik -c conda-forge opensim python=3.6.1
    source activate opensim-rl
    pip install osim-rl

These commands will create a virtual environment on your computer with the necessary simulation libraries installed. If the command `python -c "import opensim"` runs smoothly, you are done! Otherwise, please refer to our [FAQ](http://osim-rl.stanford.edu/docs/faq/) section.

Note that `source activate opensim-rl` activates the Anaconda virtual environment. You need to type it every time you open a new terminal.

## Basic usage

To execute 200 iterations of the simulation, enter the `python` interpreter and run the following:
```python
from osim.env import L2M2019Env

env = L2M2019Env(visualize=True)
observation = env.reset()
for i in range(200):
    observation, reward, done, info = env.step(env.action_space.sample())
```
![Random walk](https://raw.githubusercontent.com/stanfordnmbl/osim-rl/1679344e509e29bdcc2ee368ddf83e868d93bf61/demo/random.gif)

The function `env.action_space.sample()` returns a random vector of muscle activations, so in this example muscles are activated randomly (red indicates an active muscle and blue an inactive one). Clearly, this technique won't get us far.

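Each muscle activation is expected to lie in `[0, 1]`. An equivalent random baseline can be sketched in plain Python without the simulator; the muscle count of 22 below is an assumption for illustration only, and the real dimension should be read from `env.action_space`:

```python
import random

NUM_MUSCLES = 22  # assumed size for illustration; check env.action_space in the real environment

def random_action(n=NUM_MUSCLES):
    """Return one random activation in [0, 1] per muscle."""
    return [random.random() for _ in range(n)]

action = random_action()
print(len(action))  # number of muscle activations in the vector
```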
Your goal is to construct a controller, i.e. a function from the state space (current positions, velocities, and accelerations of joints) to the action space (muscle excitations), that will enable the model to travel as far as possible in a fixed amount of time. Suppose you trained a neural network mapping observations (the current state of the model) to actions (muscle excitations), i.e. you have a function `action = my_controller(observation)`; then:
```python
# ...
total_reward = 0.0
for i in range(200):
    # make a step given by the controller and record the state and the reward
    observation, reward, done, info = env.step(my_controller(observation))
    total_reward += reward
    if done:
        break

# Your reward is
print("Total reward %f" % total_reward)
```

You can find details about the [observation object here](http://osim-rl.stanford.edu/docs/nips2018/observation/).

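As a minimal, hypothetical stand-in for `my_controller` (not a trained policy; both the constant activation and the muscle count of 22 are assumptions for illustration), a constant-activation baseline could look like:

```python
def my_controller(observation, activation=0.05, num_muscles=22):
    """Hypothetical baseline: ignore the observation and apply the same
    small activation to every muscle. num_muscles is an assumption; read
    the true action dimension from env.action_space."""
    return [activation] * num_muscles

action = my_controller({"dummy": 0})
print(len(action))  # one activation per assumed muscle
```

Any trained policy with the same signature, observation in and activation vector out, can be dropped into the reward loop above in its place.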
## Submission

* Option 1: [**submit a solution in a docker container**](https://github.com/stanfordnmbl/neurips2019-learning-to-move-starter-kit)
* Option 2: run your controller on the server environment: [**./examples/submission.py**](https://github.com/stanfordnmbl/osim-rl/blob/master/examples/submission.py)

In order to make a submission to AIcrowd, please refer to [this page](https://github.com/AIcrowd/neurips2019-learning-to-move-starter-kit).

## Rules

Organizers reserve the right to modify challenge rules as required.

## Read more in [the official documentation](http://osim-rl.stanford.edu/)

* [Osim-rl interface](http://osim-rl.stanford.edu/docs/nips2018/interface/)
* [How to train a model?](http://osim-rl.stanford.edu/docs/training/)
* [More on training models](http://osim-rl.stanford.edu/docs/resources/)
* [Experimental data](http://osim-rl.stanford.edu/docs/nips2018/experimental/)
* [Physics underlying the model](http://osim-rl.stanford.edu/docs/nips2017/physics/)
* [Frequently Asked Questions](http://osim-rl.stanford.edu/docs/faq/)
* [Citing and credits](http://osim-rl.stanford.edu/docs/credits/)
* [OpenSim documentation](http://opensim.stanford.edu/)

## Contributions of participants

* [Understanding the Challenge](https://www.endtoend.ai/blog/ai-for-prosthetics-1) - Great materials from [@seungjaeryanlee](https://github.com/seungjaeryanlee/) on how to start

## Partners

<div class="markdown-wrap">
  <a target="_blank" href="https://cloud.google.com/">
    <img class="img-logo" height="50" src="https://dnczkxd1gcfu5.cloudfront.net/images/challenge_partners/image_file/27/google-cloud-logo.png">
  </a>
  <a target="_blank" href="http://deepmind.com/">
    <img class="img-logo" height="50" src="https://dnczkxd1gcfu5.cloudfront.net/images/challenge_partners/image_file/28/Deep-Mind-Health-WTT-10.05.15.jpg">
  </a>
  <a target="_blank" href="http://nvidia.com/">
    <img class="img-logo" height="50" src="https://dnczkxd1gcfu5.cloudfront.net/images/challenge_partners/image_file/29/nvidia.png">
  </a>
  <a target="_blank" href="http://opensim.stanford.edu/about/">
    <img class="img-logo" height="50" src="https://dnczkxd1gcfu5.cloudfront.net/images/challenge_partners/image_file/36/ncsrr.png">
  </a>
  <a target="_blank" href="https://www.tri.global/">
    <img class="img-logo" height="50" src="https://dnczkxd1gcfu5.cloudfront.net/images/challenge_partners/image_file/37/tri1.png">
  </a>
</div>