Usage
=====

.. _basic_usage:

Basic Usage
-----------

1. To train the controller, run the following commands in a terminal:

   ``python append_musculo_targets.py``

   ``python find_init_pose.py --config configs/configs.txt``

   ``python main.py --config configs/configs.txt``

   This will save the controller in ./checkpoint as training progresses. The highest reward should reach >= 55000 for good kinematic accuracy.

   The episode reward over training iterations should look like this (there may be slight variations due to the random seed, but the trend should be similar):

.. _general_usage:

General Usage
-------------

Musculoskeletal Model
~~~~~~~~~~~~~~~~~~~~~

1. Save the MuJoCo musculoskeletal model in ``./musculoskeletal_model/`` as ``musculoskeletal_model.xml``, along with its Geometry files.

   (The path to musculoskeletal_model.xml can also be specified in the configs.txt file with the *musculoskeletal_model_path* param if not using the default path above.)

2. To convert a musculoskeletal model from OpenSim to MuJoCo, please refer to MyoConverter: https://github.com/MyoHub/myoconverter

Experimental Kinematics
~~~~~~~~~~~~~~~~~~~~~~~

1. Save the experimental kinematics in ``./kinematics_data/kinematics.pkl`` as a Python dict object with the following format:

.. code-block:: text

   dict{

      <'marker_names'> : <['marker_name_1', ..., 'marker_name_n']>,

      <'train'> : <dict_train>,

      <'test'> : <dict_test>

   }

2. ``<marker_names>`` contains a list of the names of the experimental markers that were recorded. Each marker name must correspond to a body name in the musculoskeletal model xml file.

3. ``<dict_train>`` and ``<dict_test>`` are Python dictionary objects that contain the kinematics in the following format:

.. code-block:: text

   {

      <key> : <value>,

      <key> : <value>,

      .
      .
      .

      <key> : <value>

   }

``<key: int>`` is the integer index of the corresponding condition (starting from 0 for the first condition, for both the training and testing conditions).

``<value: numpy.ndarray>`` contains the kinematics for the corresponding condition, with shape ``[num_markers/targets, num_coordinates = 3, timepoints]``.

``num_markers`` is the number of experimental markers/bodies that were recorded. The order of the markers must match the order in which the marker_names are listed. For example, if ``marker_names = [hand, elbow]``, marker index 0 should contain the experimental kinematics for the hand and marker index 1 the experimental kinematics for the elbow.

``num_coordinates`` are the x [→], y [↑], and z [out of page] coordinates. A value of NaN for any coordinate will keep that coordinate locked.

An example of saving the experimental kinematics for the cycling task is given in ``./exp_kin_cycling/saving_exp_kin.ipynb``.

(The path to the kinematics.pkl file can also be specified using the *kinematics_path* param in the configs.txt file.)

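For reference, a minimal sketch of building and saving ``kinematics.pkl`` in this format is shown below; the marker names, the single condition, and the random values are placeholders (see ``./exp_kin_cycling/saving_exp_kin.ipynb`` for a complete, real example).

.. code-block:: python

   import pickle
   import numpy as np

   # Placeholder data: 2 markers x 3 coordinates (x, y, z) x 200 timepoints.
   marker_names = ['hand', 'elbow']
   num_markers, num_coordinates, timepoints = len(marker_names), 3, 200

   def make_condition():
       # In practice this would be recorded kinematics, not random numbers.
       return np.random.randn(num_markers, num_coordinates, timepoints)

   kinematics = {
       'marker_names': marker_names,
       'train': {0: make_condition()},   # condition indices start at 0
       'test': {0: make_condition()},
   }

   with open('./kinematics_data/kinematics.pkl', 'wb') as f:
       pickle.dump(kinematics, f)
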
Neural Data (optional)
~~~~~~~~~~~~~~~~~~~~~~

1. Save the recorded neural data for the training and testing conditions in ``./nusim_neural_data/neural_activity.pkl`` as a Python dict object:

.. code-block:: text

   dict{

      <'train'> : <dict_train>,

      <'test'> : <dict_test>

   }

2. ``<dict_train>`` and ``<dict_test>`` are Python dictionary objects that contain the neural data in the following format:

   ``<key: int>`` is the integer index of the corresponding condition, as in the kinematics file.

   ``<value: numpy.ndarray>`` is the numpy array that contains the recorded firing rates, with shape ``[timepoints, num_neurons]``. num_neurons is the total number of recorded neurons.

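A minimal sketch of writing ``neural_activity.pkl`` in this layout is given below (the condition count, timepoints, and firing rates are placeholders); the stimulus file described later follows the same train/test pattern.

.. code-block:: python

   import pickle
   import numpy as np

   timepoints, num_neurons = 200, 64   # placeholders; match your recordings

   neural_activity = {
       'train': {0: np.random.rand(timepoints, num_neurons)},   # condition 0
       'test': {0: np.random.rand(timepoints, num_neurons)},
   }

   with open('./nusim_neural_data/neural_activity.pkl', 'wb') as f:
       pickle.dump(neural_activity, f)
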
.. note::

   If this step is omitted, post-processing analyses that require recorded neural data, such as CCA, will not run, and nuSim training will not proceed. (The path can also be specified with *nusim_data_path* in the configs.txt file.)

Stimulus Data (optional)
~~~~~~~~~~~~~~~~~~~~~~~~

Provide any experimental stimulus data in ``./stimulus_data/stimulus_data.pkl`` as a Python dict object::

   dict{

      <'train'> : <dict_train>,

      <'test'> : <dict_test>

   }

1. ``<dict_train>`` and ``<dict_test>`` are Python dictionary objects that contain the experimental stimulus data in the following format:

   ``<key: int>`` is the integer index of the corresponding condition, as in the kinematics file.

   ``<value: numpy.ndarray>`` is the numpy array that contains the recorded stimulus data, with shape ``[timepoints, num_features]``. num_features is the number of features in that stimulus.

Initial Pose (optional)
~~~~~~~~~~~~~~~~~~~~~~~

Save the initial pose (containing the qpos and qvel) as numpy arrays in ``./inital_pose/`` as qpos.npy and qvel.npy, each with shape ``[nq, ]``, where nq is the number of joints in the xml model.

This step is optional. If omitted, the default initial pose of the xml model will be used for CMA-ES and IK.

(The initial_pose_path can also be specified in the configs.txt file.)

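A minimal sketch of saving an initial pose is shown below; ``nq`` and the zero arrays are placeholders, and the directory name follows the default path above.

.. code-block:: python

   import numpy as np

   nq = 27                # placeholder: number of joints in your xml model
   qpos = np.zeros(nq)    # replace with a sensible starting pose
   qvel = np.zeros(nq)    # typically start at rest

   np.save('./inital_pose/qpos.npy', qpos)
   np.save('./inital_pose/qvel.npy', qvel)
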
Specifications
--------------

Provide the parameters for the various modules using the ``./configs/configs.txt`` file. The details of each parameter/specification are given in the configs.txt file.

Inverse Kinematics
~~~~~~~~~~~~~~~~~~

1. **Append the xml model with targets:**

   Run:

   ``python append_musculo_targets.py``

   This will append targets to the musculoskeletal xml file that will follow the preprocessed marker kinematics during simulation.

2. **Find the initial pose for the xml model using CMA-ES and Inverse Kinematics:**

   a. Run the following command in the terminal:

      ``python find_init_pose.py --config configs/configs.txt --visualize True``

      This will use inverse kinematics (IK) to find the initial pose for the xml model that matches the initial timepoint of the target kinematics.

      If you see the output 'Initial Pose found and saved', skip step 2b.

   b. Run:

      ``python find_init_pose_ik_cma.py --config configs/configs.txt --visualize True``

      This will use CMA-ES optimization with IK to find a good initial pose for the xml model.

      If you see 'Initial Pose found and saved using CMA-ES and Inverse Kinematics', proceed to the next step.

      Otherwise, provide a good initial pose for the xml model that preferably starts close to the initial marker/target position.

3. **Visualize the target/marker trajectories using a randomly initialized uSim network:**

   Run:

   ``python main.py --config configs/configs.txt --visualize True --mode test``

   This will visualize the target trajectories using a randomly initialized uSim controller network. Make sure the target trajectories look as desired. Otherwise, change the kinematics preprocessing parameters (e.g. trajectory_scaling, center) in the ./configs/configs.txt file.

4. **Visualize the musculoskeletal model trajectory and save the corresponding sensory feedback:**

   Run:

   ``python visualize_trajectories_ik.py --config configs/configs.txt --visualize True``

   This will visualize the xml model following/tracking the training target trajectories. Before proceeding, make sure that the target trajectories are feasible and lie within the bounds of the xml model. Otherwise, adjust the target trajectories using the kinematics preprocessing parameters in the configs.txt file.

   This will also save the generated sensory feedback in ``./test_data/sensory_feedback_ik.pkl`` as a Python dict object:

   ``<key: int>`` corresponds to the integer index of the corresponding training condition.

   ``<value: numpy.ndarray>`` has shape ``[timepoints, num_of_state_feedback_variables]``.

This can be used to get proprioception for training neural networks.

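A minimal sketch of loading this file and extracting the feedback for one training condition (the condition index 0 is a placeholder):

.. code-block:: python

   import pickle

   with open('./test_data/sensory_feedback_ik.pkl', 'rb') as f:
       sensory_feedback = pickle.load(f)

   # shape: [timepoints, num_of_state_feedback_variables]
   feedback_condition_0 = sensory_feedback[0]
   print(feedback_condition_0.shape)
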
Training the uSim Controller using DRL
--------------------------------------

**(Make sure the DRL/SAC-related parameters are specified correctly in the configs.txt file.)**

1. To train the uSim controller using the provided DRL algorithm, run:

   ``python main.py --config configs/configs.txt``

2. To continue training from a previous session, run:

   ``python main.py --config configs/configs.txt --load_saved_nets_for_training True``

Testing the uSim Controller
---------------------------

To test the trained uSim controller, run:

   ``python main.py --config configs/configs.txt --mode test --visualize True``

This will visualize the xml model performing the movements for the training and testing conditions using the trained uSim controller.

This will also save the files used for the post training analyses.

Post Training Analyses
----------------------

After training, the following modules are used for various analyses. All of these modules are in ``./Analysis``.

1. **Kinematics Visualization:**

   To visualize the kinematics for the training and testing conditions, see ``visualize_kinematics.ipynb``.

2. **PCA:**

   To visualize the uSim controller's population trajectories in PCA subspace, run:

   ``python collective_pca.py``

3. **Canonical Correlation Analysis (CCA):**

   See ``CCA.ipynb``.

4. **Linear Regression Analysis (LRA):**

   See ``LRA.ipynb``.

5. **Procrustes:**

   See ``procrustes.ipynb``.

6. **Fixed Point (FP) Analysis:**

   Clone fixed-point-finder (https://github.com/mattgolub/fixed-point-finder) into ./Analysis.

   Run:

   ``python find_fp.py``

   The fixed point analysis is based on the original implementation at https://github.com/mattgolub/fixed-point-finder; refer to that GitHub repo for further information.

7. **Rotational Dynamics (requires MATLAB):**

   See and run ``jpca_nusim.m``.

.. note::

   The jPCA analysis is based on MM Churchland's original implementation. Please see it for further details (https://www.dropbox.com/scl/fo/duf5zbwcibsux467c6oc9/AIN-ZiFsy2Huyh8h7VMdL7g?rlkey=3o5axmq5hirel4cij7g64jc0r&e=1&dl=0).

**Important for jPCA analysis:**

1. Make sure that ./Analyses/jPCA_ForDistribution is included in the MATLAB path along with all its sub-directories.

2. Make sure that the uSim test_data folder is included in the MATLAB path. The test_data folder is where the jPCA data is saved during uSim testing.

Perturbation Analyses
---------------------

Selective Feedback Elimination (SFE)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Specify the part of the sensory feedback to be eliminated in ./SAC/perturbation_specs.py using the *sf_elim* variable. Run:

   ``python main.py --config configs/configs.txt --mode SFE --visualize True``

Sensory Perturbation
~~~~~~~~~~~~~~~~~~~~

Specify the perturbation vector to be added to the selected sensory feedback in ./SAC/perturbation_specs.py, e.g. *muscle_lengths_pert*. Run:

   ``python main.py --config configs/configs.txt --mode sensory_pert --visualize True``

Neural Perturbation
~~~~~~~~~~~~~~~~~~~

The neural perturbation adds the given perturbation to the nodes of the uSim/nuSim controller's RNN.

Specify the neural perturbation vector in perturbation_specs.py using the *neural_pert* variable. Run:

   ``python main.py --config configs/configs.txt --mode neural_pert --visualize True``

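As an illustration only, the perturbation variables in ./SAC/perturbation_specs.py might be set along the following lines. The required array lengths (number of feedback variables, muscles, or RNN units) and the exact formats expected for *sf_elim*, *muscle_lengths_pert*, and *neural_pert* are defined by the repository, so treat this as a placeholder sketch rather than the actual file contents.

.. code-block:: python

   import numpy as np

   # Hypothetical sizes; these must match your model and controller.
   num_muscles = 50        # placeholder: number of muscles in the xml model
   num_rnn_units = 256     # placeholder: number of RNN units in the uSim controller

   # SFE: which part of the sensory feedback to eliminate (format per perturbation_specs.py).
   sf_elim = 'muscle_lengths'                     # placeholder value

   # Sensory perturbation: vector added to the selected sensory feedback.
   muscle_lengths_pert = 0.05 * np.ones(num_muscles)

   # Neural perturbation: vector added to the RNN nodes of the controller.
   neural_pert = 0.1 * np.random.randn(num_rnn_units)
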
Change Musculoskeletal Properties
---------------------------------

To test the trained uSim controller under changed musculoskeletal properties:

1. Go to the folder ``./musculoskeletal_model/``. Copy and paste the xml model ``musculo_targets.xml``, and rename the copy ``musculo_targets_pert.xml``.

2. Change the desired musculoskeletal properties in the xml model ``musculo_targets_pert.xml``.

3. Run:

   ``python main.py --config configs/configs.txt --mode musculo_properties --visualize True``

All of the above perturbation analyses change the post training analyses files in place. To run the post training analyses after a perturbation, see the Post Training Analyses section.