.. _tutorials:

Tutorials
######################

Here is a set of examples on how to use the different MyoSuite models and non-stationarities.

It is highly recommended to read through the `OpenAI Gym API <https://gymnasium.farama.org/>`__ first to get familiar with its interface.

* :ref:`run_myosuite`
* :ref:`run_visualize_index_movements`
* :ref:`run_trained_policy`
* :ref:`advanced_muscle_conditions`
* :ref:`test_muscle_fatigue`
* :ref:`test_sarcopenia`
* :ref:`test_tendon_transfer`
* :ref:`exoskeleton`
* :ref:`use_reinforcement_learning`
* :ref:`resume_training`
* :ref:`load_deprl_baseline`
* :ref:`load_MyoReflex_baseline`
* :ref:`customizing_tasks`

.. _jupyter_notebook:

Tutorials on Jupyter-Notebook
========================================
Please refer to our Jupyter-Notebook tutorials `here <https://github.com/facebookresearch/myosuite/tree/main/docs/source/tutorials>`__ for the key functionalities, such as model usage and examples of using RL.

There are also tutorials from our ICRA workshops: `ICRA-2023 <https://colab.research.google.com/drive/1zFuNLsrmx42vT4oV8RbnEWtkSJ1xajEo>`__ (running a simple MyoSuite environment) and `ICRA-2024 <https://colab.research.google.com/drive/1JwxE7o6Z3bqCT4ewELacJ-Z1SV8xFhKK#scrollTo=QDppGIzHB9Zu>`__ (hand-object manipulation).

.. _run_myosuite:

Test Environment
======================
Example of how to use an environment, e.g. by sending random actions:

.. code-block:: python

    from myosuite.utils import gym
    env = gym.make('myoElbowPose1D6MRandom-v0')
    env.reset()
    for _ in range(1000):
        env.mj_render()
        env.step(env.action_space.sample())  # take a random action
    env.close()

.. _run_visualize_index_movements:

Activate and visualize finger movements
============================================
Example of how to generate and visualize a movement, e.g. index finger flexion:

.. code-block:: python

    from myosuite.utils import gym
    env = gym.make('myoHandPoseRandom-v0')
    env.reset()
    for _ in range(1000):
        env.mj_render()
        env.step(env.action_space.sample())  # take a random action
    env.close()

.. _run_trained_policy:

Test trained policy
======================
Example of using a trained policy, e.g. for elbow flexion, and changing non-stationarities:

.. code-block:: python

    from myosuite.utils import gym
    policy = "iterations/best_policy.pickle"

    import pickle
    pi = pickle.load(open(policy, 'rb'))

    env = gym.make('myoElbowPose1D6MRandom-v0')
    env.reset()
    for _ in range(1000):
        env.mj_render()
        env.step(env.action_space.sample())  # take a random action

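To actually drive the environment with the loaded policy rather than with random actions, the observations returned by the environment can be fed back into the policy. The sketch below is a minimal example assuming an ``mjrl``-style policy that exposes a ``get_action(obs)`` method returning the action as its first element; adapt the call to whatever interface your pickled policy provides.

.. code-block:: python

    from myosuite.utils import gym
    import pickle

    # placeholder path: point this at your own trained policy
    pi = pickle.load(open("iterations/best_policy.pickle", "rb"))

    env = gym.make('myoElbowPose1D6MRandom-v0')
    reset_out = env.reset()
    # depending on the gym/gymnasium version, reset() returns obs or (obs, info)
    obs = reset_out[0] if isinstance(reset_out, tuple) else reset_out
    for _ in range(1000):
        env.mj_render()
        action = pi.get_action(obs)[0]  # assumes an mjrl-style policy interface
        obs = env.step(action)[0]       # keep only the next observation
    env.close()
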
|
|
|
.. _advanced_muscle_conditions:

Advanced Muscle Conditions
=========================================

Besides simulating healthy muscle conditions, MyoSuite also provides features to simulate a number of muscle deficiencies. We aim to provide a safe and trustworthy simulation environment for developing healthcare and rehabilitation strategies.

.. _test_muscle_fatigue:

Muscle Fatigue
+++++++++++++++++++++++++++++++++++++
Muscle fatigue is a short-term (seconds to minutes) effect that appears after sustained or repetitive voluntary movement,
and it has been linked to traumas, e.g. cumulative trauma disorder (Chaffin et al. (2006)).
A dynamic muscle fatigue model (`Cheema et al. (2020) <https://dl.acm.org/doi/pdf/10.1145/3313831.3376701>`__) was integrated into the modeling framework.
This model is based on the idea that different types of muscle fiber have different contributions
and resistance to fatigue (Vøllestad (1997)).
The current implementation is simplified to use the same fatigue factor for all muscles and
to allow muscles to become completely fatigued.

.. image:: images/Fatigue.png
  :width: 800


This example shows how to add fatigue to a model. The muscle force will gradually decrease as a result of repeated actions. It tests random actions on a model without and then with muscle fatigue.

.. code-block:: python

    from myosuite.utils import gym
    env = gym.make('myoElbowPose1D6MRandom-v0')
    env.reset()
    for _ in range(1000):
        env.mj_render()
        env.step(env.action_space.sample())  # take a random action

    # Add muscle fatigue
    env = gym.make('myoFatiElbowPose1D6MRandom-v0')
    env.reset()
    for _ in range(1000):
        env.mj_render()
        env.step(env.action_space.sample())  # take a random action
    env.close()

More advanced examples as well as detailed explanations can be found in `this tutorial <https://github.com/MyoHub/myosuite/tree/main/docs/source/tutorials/7_Fatigue_Modeling.ipynb>`__.

.. _test_sarcopenia:

Sarcopenia
+++++++++++++++++++++++++++++++++++++

Sarcopenia is a muscle disorder that occurs commonly in the elderly population (Cruz-Jentoft and Sayer (2019))
and is characterized by a reduction in muscle mass or volume.
The peak in grip strength can be reduced by up to 50% from age 20 to 40 (Dodds et al. (2016)).
We modeled sarcopenia for each muscle as a 50% reduction of its maximal isometric force.

This example shows how to add sarcopenia, or muscle weakness, to a model. The maximum muscle force will be reduced. It tests random actions on a model without and then with muscle weakness.

.. code-block:: python

    from myosuite.utils import gym
    env = gym.make('myoElbowPose1D6MRandom-v0')
    env.reset()
    for _ in range(1000):
        env.mj_render()
        env.step(env.action_space.sample())  # take a random action

    # Add muscle weakness
    env = gym.make('myoSarcElbowPose1D6MRandom-v0')
    env.reset()
    for _ in range(1000):
        env.mj_render()
        env.step(env.action_space.sample())  # take a random action
    env.close()

.. _test_tendon_transfer:

Physical tendon transfer
+++++++++++++++++++++++++++++++++++++
Contrary to muscle fatigue or sarcopenia, which affect all muscles, tendon transfer surgery can target a single
muscle-tendon unit. Tendon transfer surgery allows redirecting the application point of muscle forces from one joint
DoF to another (see below). It can be used to regain functional control of a joint or limb motion after injury.
One of the current procedures in the hand is the tendon transfer of the Extensor Indicis Proprius (EIP) to replace the
Extensor Pollicis Longus (EPL) (Gelb (1995)). Rupture of the EPL can happen after a broken wrist and cause a loss of control
of thumb extension. We introduce a physical tendon transfer where the EIP application point of the tendon was moved
from the index finger to the thumb and the EPL was removed.

.. image:: images/tendon_transfer.png
  :width: 400

This example shows how to load a model with a physical tendon transfer, which simulates the redirected muscle actuation.

.. code-block:: python

    from myosuite.utils import gym
    env = gym.make('myoHandKeyTurnFixed-v0')
    env.reset()
    for _ in range(1000):
        env.mj_render()
        env.step(env.action_space.sample())  # take a random action

    # Add tendon transfer
    env = gym.make('myoTTHandKeyTurnFixed-v0')
    env.reset()
    for _ in range(1000):
        env.mj_render()
        env.step(env.action_space.sample())  # take a random action
    env.close()

.. _exoskeleton:

Exoskeleton assistance
+++++++++++++++++++++++++++++++++++++
Exoskeleton-assisted rehabilitation is becoming an increasingly common practice (Jezernik et al. (2003)) due to its multiple benefits (Nam et al. (2017)).
An elbow exoskeleton was modeled via an ideal actuator and two additional supports weighing 0.101 kg (upper arm) and 0.111 kg (forearm). The assistance given by the exoskeleton is a percentage of the biological joint torque, following the neuromusculoskeletal controller presented in Durandau et al. (2019).

The models and code will be released soon.

.. image:: images/elbow_exo.png
  :width: 200

.. _use_reinforcement_learning:

Using Reinforcement Learning
=============================================
MyoSuite provides features to support RL training. Here are examples of using different RL libraries with MyoSuite.

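In general, any library that speaks the Gym/Gymnasium API can be trained on a MyoSuite environment. The sketch below is a minimal example assuming ``stable-baselines3`` (v2 or newer, which uses the Gymnasium API) is installed; it is not an official MyoSuite baseline, and the environment id, timestep budget, and hyperparameters are placeholders.

.. code-block:: python

    from myosuite.utils import gym
    from stable_baselines3 import PPO

    env = gym.make('myoElbowPose1D6MRandom-v0')

    # Train a PPO agent with default hyperparameters (placeholder budget)
    model = PPO('MlpPolicy', env, verbose=1)
    model.learn(total_timesteps=10_000)

    # Roll out the trained agent for a few steps
    obs = env.reset()
    obs = obs[0] if isinstance(obs, tuple) else obs  # handle (obs, info) resets
    for _ in range(200):
        action, _ = model.predict(obs, deterministic=True)
        obs = env.step(action)[0]
    env.close()
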
|
|
|
.. _resume_training:

Resume Learning of policies
+++++++++++++++++++++++++++++++++++++
When using ``mjrl``, it might be necessary to resume training of a policy locally. This can be done with the following command:

.. code-block:: bash

    python3 hydra_mjrl_launcher.py --config-path config --config-name hydra_biomechanics_config.yaml hydra/output=local hydra/launcher=local env=myoHandPoseRandom-v0 job_name=[Absolute Path of the policy] rl_num_iter=[New Total number of iterations]

.. _load_deprl_baseline:

Load DEP-RL Baseline
+++++++++++++++++++++++++++++++++++++
See `here <https://deprl.readthedocs.io/en/latest/index.html>`__ for more detailed documentation of ``deprl``.

.. note::
    ``deprl`` requires Python `3.9` or newer.

If you want to load and execute the pre-trained DEP-RL baseline, make sure that the ``deprl`` package is installed.

.. code-block:: python

    from myosuite.utils import gym
    import deprl
    from deprl import env_wrappers

    # we can pass arguments to the environments here
    env = gym.make('myoLegWalk-v0', reset_type='random')
    env = env_wrappers.GymWrapper(env)
    policy = deprl.load_baseline(env)
    obs = env.reset()
    for i in range(1000):
        env.mj_render()
        action = policy(obs)
        obs, *_ = env.step(action)
    env.close()

.. _load_MyoReflex_baseline:

Load MyoReflex Baseline
+++++++++++++++++++++++++++++++++++++

To load and execute the MyoReflex controller with baseline parameters, run the MyoReflex tutorial `here <https://github.com/facebookresearch/myosuite/tree/main/docs/source/tutorials/4b_reflex>`__.

.. _customizing_tasks:

Customizing Tasks
======================

In order to create a new customized task, there are two steps:

1. Set up a new environment class for the new task

2. Register the new task

Set up a new environment
+++++++++++++++++++++++++

Environment classes are developed according to the `OpenAI Gym definition <https://gymnasium.farama.org/api/env/>`__
and contain all the information specific to a task:
how to interact with the environment, how to observe it and how to
act on it. In addition, each environment class contains
a reward function which converts the observation into a
number that establishes how good the observation is with
respect to the task objectives. In order to create a new
task, a new environment class needs to be generated, e.g.
``reach2_v0.py`` (see for example how `reach_v0.py <https://github.com/MyoHub/myosuite/blob/main/myosuite/envs/myo/myobase/reach_v0.py>`__ is structured).
In this file, it is possible to specify the type of observations (e.g. joint angles, velocities, forces), actions (e.g. muscles, motors), the goal, and the reward.

.. code-block:: python

    from myosuite.envs.myo.base_v0 import BaseV0

    # Class extends BaseV0
    class NewReachEnvV0(BaseV0):
        ....

        # defines the observation
        def get_obs_dict(self, sim):
            ....

        # defines the rewards
        def get_reward_dict(self, obs_dict):
            ...

        # defines the reset condition
        def reset(self):
            ...

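As an illustration of what a reward function might compute, the standalone sketch below turns a hypothetical reach error taken from the observation dictionary into a small reward dictionary. The key names (``reach_err``, ``reach``, ``solved``, ``done``) and thresholds are placeholders chosen for this example only; the exact structure expected by MyoSuite should be taken from `reach_v0.py`.

.. code-block:: python

    import numpy as np

    def toy_reward_dict(obs_dict, success_threshold=0.01, far_threshold=0.35):
        """Illustrative only: map a reach error to a few reward terms."""
        reach_dist = np.linalg.norm(obs_dict['reach_err'], axis=-1)
        return {
            'reach': -reach_dist,                      # dense shaping term (closer is better)
            'solved': reach_dist < success_threshold,  # task success flag
            'done': reach_dist > far_threshold,        # terminate if the target is too far
        }

    # Example usage with a dummy observation dictionary
    print(toy_reward_dict({'reach_err': np.array([0.05, 0.0, 0.02])}))
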
|
|
|
.. _setup_base_class:


Register the new environment
++++++++++++++++++++++++++++++

Once the task `reach2_v0.py` has been defined, the new environment needs to be registered so that it is
visible when importing `myosuite`. This is achieved by introducing the new environment in
the `__init__.py` (called when the library is imported), where the registration routine happens.
The registration of the new environment is obtained by adding:

.. code-block:: python

    from gym.envs.registration import register

    register(id='newReachTask-v0',
             entry_point='myosuite.envs.myo.myobase.reach_v0:NewReachEnvV0',  # where to find the new Environment Class
             max_episode_steps=200,  # duration of the episode
             kwargs={
                 'model_path': curr_dir + '/../assets/hand/myohand_pose.xml',  # where the xml file of the environment is located
                 'target_reach_range': {'IFtip': ((0.1, 0.05, 0.20), (0.2, 0.05, 0.20)),},  # used in the setup to define the goal, e.g. a random target position between 0.1 and 0.2 along the x coordinate
                 'normalize_act': True,  # whether to use normalized actions via a sigmoid function
                 'frame_skip': 5,  # collect a sample every 5 simulation steps
             }
    )

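After registration, the new task can be created like any other MyoSuite environment. The minimal check below assumes the registration above actually runs as part of the ``myosuite`` import and that ``newReachTask-v0`` is the id chosen in the snippet above:

.. code-block:: python

    import myosuite  # triggers the registrations in __init__.py
    from myosuite.utils import gym

    env = gym.make('newReachTask-v0')
    env.reset()
    for _ in range(100):
        env.step(env.action_space.sample())  # random actions, just to check the task runs
    env.close()
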
|
|
|
.. _register_new_environment: