
Quadruped Robot Research Project Work Summary: Month 7 Week 3

Weekly Summary

Main Work Summary

  1. Studied the official Isaac Gym documentation
  2. Imported Go1 into Legged Gym for testing
  3. Gained a preliminary understanding of CPGs (Central Pattern Generators)

Reflections

On Thursday through Saturday this week I slept poorly and worked long hours, so my condition and efficiency were both low. On Sunday afternoon, while reading the Isaac Gym documentation, I set myself a small goal: stay focused and finish the Tensor API section within 30 minutes. In the end I did not finish it, but I could feel my attention improving and my overall energy was better.

So I think that when I am in poor shape, I can pick an easy task, set a very short-term goal, and not worry too much about whether I finish it (finishing is a bonus). As long as the result is that I work more attentively than before, it gives me positive reinforcement and energy, which helps me climb out of the slump.

Extracurricular Learning

Started reading 《网络是怎样连接的》 (How Networks Work) by Tsutomu Tone (Japan). The book has 6 chapters; finished Section 1.1.

2023.07.17

Learning agile and dynamic motor skills for legged robots by Hwangbo, Lee, Hutter

Method overview:

Experiment format:

Paper highlights:

Physical robot type: ANYmal

Simulation / training platform:

Experimental data / source code:

Video: Learning Agile and Dynamic Motor Skills for Legged Robots - YouTube

The (unit) gravity direction vector in the body (user) frame

  1. 3D旋转变换——欧拉角 - 知乎
  2. 三维旋转之欧拉角 - 知乎
Euler Angles

Roll, pitch, and yaw are rotations about the x-, y-, and z-axes respectively (following the ROS rpy convention quoted below)

Writing the Euler angles as ($\varphi$, $\theta$, $\psi$) (where $\varphi$ is the roll angle, $\theta$ the pitch angle, and $\psi$ the yaw angle), a general 3D rotation matrix M can be expressed as the product of three elementary rotation matrices:
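
One consistent way to write out the product, assuming the x-y-z (roll, then pitch, then yaw) order used by the ROS rpy convention quoted below:

$$
M = R_z(\psi)\,R_y(\theta)\,R_x(\varphi) =
\begin{bmatrix} \cos\psi & -\sin\psi & 0 \\ \sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\varphi & -\sin\varphi \\ 0 & \sin\varphi & \cos\varphi \end{bmatrix}
$$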

Obtaining the Base Pose

getBasePositionAndOrientation reports the current position and orientation of the base (or root link) of the body in Cartesian world coordinates. The orientation is a quaternion in [x,y,z,w] format.

getEulerFromQuaternion requires quaternion format [x,y,z,w] and returns a list of 3 floating point values, a vec3. The rotation order is first roll around X, then pitch around Y and finally yaw around Z, as in the ROS URDF rpy convention.
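
A minimal PyBullet sketch tying the two calls above to the projected-gravity vector mentioned earlier; robot_id is a hypothetical body id and a connected simulation with a loaded robot is assumed:

import pybullet as p
import numpy as np

pos, orn = p.getBasePositionAndOrientation(robot_id)  # orn is a quaternion [x, y, z, w]
roll, pitch, yaw = p.getEulerFromQuaternion(orn)       # rotation order: X (roll), Y (pitch), Z (yaw)

# 3x3 rotation matrix from base frame to world frame (returned row-major)
R = np.array(p.getMatrixFromQuaternion(orn)).reshape(3, 3)

# unit gravity direction expressed in the base (user) frame: R^T @ g_world
gravity_world = np.array([0.0, 0.0, -1.0])
projected_gravity = R.T @ gravity_world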

URDF (Unified Robot Description Format)

  1. 无处不在的小土 - URDF 和机器人模型 (一)
  2. 无处不在的小土 - URDF 和机器人模型 (二)
  3. URDF学习 1 - 什么是 URDF 以及怎么理解一个 URDF 文件 | Wo看见常威在打来福的博客
  4. 初次了解 URDF | 小白乔学技术的博客
  5. 通用机器人描述格式URDF文件简介与生成 - 知乎
  6. cn/urdf/Tutorials - ROS Wiki
check_urdf <urdf_name>.urdf
  1. Nvidia Isaacgym + ETH leggedgym 配置指南 - 知乎
  2. 主页 - LearnOpenGL CN

2023.07.18

In the morning a stone caused lower back and abdominal pain; I went to the hospital for treatment in the afternoon.

PyBullet Torque Control

Applying torque control in PyByllet makes object fly away from the secene - Stack Overflow

Python Examples of p.TORQUE_CONTROL

The official PyBullet Quickstart Guide, under setJointMotorControl2/Array, says:

We can control a robot by setting a desired control mode for one or more joint motors. During the stepSimulation the physics engine will simulate the motors to reach the given target value that can be reached within the maximum motor forces and other constraints.
Important Note: by default, each revolute joint and prismatic joint is motorized using a velocity motor. You can disable those default motor by using a maximum force of 0. This will let you perform torque control.

maxForce = 0
mode = p.VELOCITY_CONTROL
p.setJointMotorControl2(objUid, jointIndex, controlMode=mode, force=maxForce)
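
Once the default velocity motor is disabled, a torque can be applied on every simulation step; a minimal sketch, where desired_torque is a hypothetical value produced by your own controller:

p.setJointMotorControl2(objUid, jointIndex,
                        controlMode=p.TORQUE_CONTROL,
                        force=desired_torque)
p.stepSimulation()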

Actuator Network

Actuator Network Training · Issue #2 · Improbable-AI/walk-these-ways · GitHub


2023.07.19

Python Notes

pybullet.setAdditionalSearchPath()

pybullet_data.getDataPath()

os.path.join()

os.path.dirname()
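
A small sketch of how these calls are typically combined to locate and load assets; the go1.urdf path below is a made-up example:

import os
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)

# let loadURDF also search PyBullet's bundled data directory
p.setAdditionalSearchPath(pybullet_data.getDataPath())
plane_id = p.loadURDF("plane.urdf")

# build a path relative to the current script with os.path.dirname / os.path.join
script_dir = os.path.dirname(os.path.abspath(__file__))
urdf_path = os.path.join(script_dir, "assets", "go1.urdf")  # hypothetical location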

Meeting

Quadruped — isaacsim 2022.2.1 documentation

Omniverse Launcher Installation

Failure and Its Cause

AppImages rely on a filesystem that needs FUSE version 2 to run, but Ubuntu 22.04 no longer ships with it installed and configured by default; reinstalling and configuring it resolves the problem.


Solution
  1. Ubuntu 22.04 解决使用 .AppImage 文件方法_ubuntu 打开 appimage | splendid.rain生的博客

  2. Ubuntu 升级到22.04之后,之前的 AppImage 点击不能运行了

The latter advises against installing the fuse package on Ubuntu versions $\ge$ 22.04.

sudo apt install fuse libfuse2
sudo modprobe fuse
sudo groupadd fuse
user="$(whoami)"
sudo usermod -a -G fuse $user

Or:

sudo add-apt-repository universe
sudo apt install libfuse2
Installing Isaac Sim

Isaac Sim探索 |(一)安装 Omniverse 及 Isaac Sim - 知乎

  1. os.path.join() 函数用法 | MclarenSenna 的博客
  2. python 中的os.path.dirname与os.path.dirname(__file__)的用法

2023.07.20

Model / Simulation Environment Verification

Quadruped Joint Names

Each of HyQ’s legs has three active rotational degrees of freedom (DOF): the hip abduction/adduction (HAA) joint, the hip flexion/extension (HFE) joint, and the knee flexion/extension joint (KFE), as depicted in Fig. 2(b). More details on the robot design, kinematics and dimensions can be found in [8]. All the joints are actuated by high-speed servovalves connected to hydraulic asymmetric cylinders (HFE and KFE) and semi-rotary vane actuators (HAA).


File Modifications

Modifying the URDF

JOINT / LINK NAME | Anymal | A1 | Go1
Front-left / rear-left / front-right / rear-right | LF / LH / RF / RH | FL / RL / FR / RR | FL / RL / FR / RR
Hip joint | HAA | hip_joint | hip_joint
Thigh joint | HFE | thigh_joint | thigh_joint
Calf (knee) joint | KFE | calf_joint | calf_joint

Modify the task configuration file <task>.yaml

Modify the reinforcement learning configuration file <task>PPO.yaml

How to specify the 12 joints corresponding to the action space

How to Build an IsaacGymEnvs Task

Reference Reading

Read the Creating a New Task section of framework.md

使用 Isaac Gym 来强化学习mycobot 抓取任务 | 电子发烧友网


Isaac-gym(9):项目更新、benchmarks框架梳理 | hongliyu_lvliyu的博客


How to Build a Task

Building a Task requires completing the following three parts:

  • Main program: <TaskName>.py (the main body of the task, containing the overall design, environment creation, reward function, control module, etc.)
  • Task parameter file: <TaskName>.yaml (located in ~/isaacgymenvs/cfg/task)
  • Reinforcement learning configuration file: <TaskName>PPO.yaml (located in ~/isaacgymenvs/cfg/train)

After training, the generated model files are located in ~/isaacgymenvs/runs/<TaskName>

The folder contains three items: nn, summaries, and config.yaml, where config.yaml records this run's environment parameters (env), simulation parameters (sim), training parameters (train), and so on.

Creating a New Task

To build a Task with Isaac Gym's RL framework, first create a new script file in isaacgymenvs/tasks, then import the necessary libraries:

from isaacgym import gymtorch
from isaacgym import gymapi

from .base.vec_task import VecTask

Then create a class MyNewTask that inherits from VecTask:

class MyNewTask(VecTask):
    def __init__(self, cfg, rl_device, sim_device, graphics_device_id, headless,
                 virtual_screen_capture, force_render):
        ...
        super().__init__(cfg=config_dict)
        # initialize the DOF state tensor
        dof_state_tensor = self.gym.acquire_dof_state_tensor(self.sim)
        self.dof_state = gymtorch.wrap_tensor(dof_state_tensor)

    # the following methods must be overridden: create_sim, pre_physics_step, post_physics_step
    def create_sim(self):
        # implement sim set up and environment creation here
        # - set up-axis
        # - call super().create_sim with device args (see docstring)
        # - create ground plane
        # - set up environments
        ...

    def pre_physics_step(self, actions):
        # implement pre-physics simulation code here
        # - e.g. apply actions
        ...

    def post_physics_step(self):
        # implement post-physics simulation code here
        # - e.g. compute reward, compute observations
        ...

To run the task from train.py, add the following to isaacgymenvs/tasks/__init__.py:

from isaacgymenvs.tasks.my_new_task import MyNewTask
...
isaac_gym_task_map = {
    'Anymal': Anymal,
    # ...
    'MyNewTask': MyNewTask,
}

Finally, create the corresponding environment parameter configuration file and reinforcement learning configuration file.

Isaac Gym

RL | hongliyu_lvliyu的博客

Isaac-gym(2): 官方文档之 examples_hongliyu_lvliyu的博客-CSDN博客

Isaac-gym(3): 官方文档——programming 之仿真设置_gym官方文档 | hongliyu_lvliyu的博客

Isaac-gym(4): 物理模拟 isaac gym 可以隐藏关节吗 | hongliyu_lvliyu的博客

Isaac Gym Environments for Legged Robots - ETH Zurich $*$

Code Structure $*$
  1. Each environment is defined by an env file (legged_robot.py) and a config file (legged_robot_config.py). The config file contains two classes: one containing all the environment parameters (LeggedRobotCfg) and one for the training parameters (LeggedRobotCfgPPO).
  2. Both env and config classes use inheritance.
  3. Each non-zero reward scale specified in cfg will add a function with a corresponding name to the list of elements which will be summed to get the total reward (see the sketch below).
  4. Tasks must be registered using task_registry.register(name, EnvClass, EnvConfig, TrainConfig). This is done in envs/__init__.py, but can also be done from outside of this repository.
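
As an illustration of point 3, a sketch of the naming convention: a non-zero scale such as rewards.scales.torques in the cfg makes the env collect a method with the matching name (member names follow legged_gym's style; treat the exact body as an assumption):

# assumes: import torch
def _reward_torques(self):
    # penalize large joint torques; self.torques has shape (num_envs, num_dof)
    return torch.sum(torch.square(self.torques), dim=1)
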
Usage
  1. Train:

    python issacgym_anymal/scripts/train.py --task=anymal_c_flat
    • To run on CPU add following arguments: --sim_device=cpu, --rl_device=cpu (sim on CPU and rl on GPU is possible).
    • To run headless (no rendering) add --headless.
    • Important: To improve performance, once the training starts press v to stop the rendering. You can then enable it later to check the progress.
    • The trained policy is saved in issacgym_anymal/logs/<experiment_name>/<date_time>_<run_name>/model_<iteration>.pt. Where <experiment_name> and <run_name> are defined in the train config.
    • The following command line arguments override the values set in the config files:
      • --task TASK: Task name.
      • --resume: Resume training from a checkpoint.
      • --experiment_name EXPERIMENT_NAME: Name of the experiment to run or load.
      • --run_name RUN_NAME: Name of the run.
      • --load_run LOAD_RUN: Name of the run to load when resume=True. If -1: will load the last run.
      • --checkpoint CHECKPOINT: Saved model checkpoint number. If -1: will load the last checkpoint.
      • --num_envs NUM_ENVS: Number of environments to create.
  2. Play a trained policy:

    python issacgym_anymal/scripts/play.py --task=anymal_c_flat
    • By default the loaded policy is the last model of the last run of the experiment folder.
    • Other runs/model iteration can be selected by setting load_run and checkpoint in the train config.
Adding a new environment $*$

The base environment legged_robot implements a rough-terrain locomotion task. The corresponding cfg does not specify a robot asset (URDF/MJCF) and has no reward scales.

  1. Add a new folder to envs/ with '<your_env>_config.py', which inherits from an existing environment's cfgs (see the sketch after this list).
  2. If adding a new robot:
    • Add the corresponding assets to resources/.
    • In cfg set the asset path, define body names, default_joint_positions and PD gains. Specify the desired train_cfg and the name of the environment (python class).
    • In train_cfg set experiment_name and run_name.
  3. (If needed) implement your environment in <your_env>.py, inherit from an existing environment, overwrite the desired functions and/or add your reward functions.
  4. Register your env in isaacgym_anymal/envs/__init__.py.
  5. Modify/Tune other parameters in your cfg, cfg_train as needed. To remove a reward set its scale to zero. Do not modify parameters of other envs!
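
A minimal sketch of what steps 1 and 2 might look like for Go1, assuming legged_gym's nested config-class style (paths, joint angles, and gains below are placeholders, not verified values):

from legged_gym.envs.base.legged_robot_config import LeggedRobotCfg, LeggedRobotCfgPPO

class Go1RoughCfg(LeggedRobotCfg):
    class asset(LeggedRobotCfg.asset):
        file = '{LEGGED_GYM_ROOT_DIR}/resources/robots/go1/urdf/go1.urdf'  # assumed path
        foot_name = "foot"

    class init_state(LeggedRobotCfg.init_state):
        default_joint_angles = {  # target joint angles when action = 0.0
            'FL_hip_joint': 0.1, 'FL_thigh_joint': 0.8, 'FL_calf_joint': -1.5,
            # ... remaining legs omitted
        }

    class control(LeggedRobotCfg.control):
        stiffness = {'joint': 20.0}  # P gain
        damping = {'joint': 0.5}     # D gain

class Go1RoughCfgPPO(LeggedRobotCfgPPO):
    class runner(LeggedRobotCfgPPO.runner):
        experiment_name = 'rough_go1'
        run_name = ''

The new cfg pair would then be registered via task_registry.register, as described in the Code Structure section above.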

2023.07.21

Isaac Gym Environments for Legged Robots $*$ train.py

step(self, action) structure
  1. self.actions [clip]

  2. self.render()

  3. for _ in range(decimation)

    • self.torques = self._compute_torques(self.actions)

    • self.gym.set_dof_actuation_force_tensor(...,...(self.torques))

    • self.gym.simulate()

    • self.gym.refresh_dof_state_tensor()

  4. self.post_physics_step() 检查 termination,计算 observations 和 rewards

    • self.gym.refresh_actor_root_state_tensor()

    • self.gym.refresh_net_contact_force_tensor()

    • Increment counters: episode_length_buf, common_step_counter

    • Fetch some variables: base_quat, base_lin_vel, base_ang_vel, projected_gravity

    • self._post_physics_step_callback() is called before terminations, observations, and rewards are computed; it computes angular-velocity commands from the goal and heading, computes the measured terrain heights, and applies random pushes to the robots

      • _resample_commands() samples random commands for some environments [env_ids]
      • self.measured_heights = self._get_heights()
      • self._push_robots() (common_step_counter % push_interval == 0)
    • self.check_termination() checks which environments need to be reset

      • self.reset_buf
      • self.time_out_buf [episode_length_buf]
    • self.compute_reward() computes the rewards [self.reward_functions]

    • self.reset_idx() resets a subset of environments [env_ids]

      • Update curriculum difficulty:
        • self._update_terrain_curriculum()
        • self._update_command_curriculum()
      • Reset the robots:
        • self._reset_dofs()
        • self._reset_root_states()
      • Random commands: _resample_commands()
      • Reset buffers: last_actions, last_dof_vel, feet_air_time, episode_length_buf, reset_buf
    • self.compute_observations() computes the observations

    • Update some variables: last_actions, last_dof_vel, last_root_vel

  5. self.obs_buf [clip]

  6. return self.obs_buf, self.rew_buf, self.reset_buf, self.extras
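
A condensed sketch of the flow above, written with legged_gym-style member names (cfg fields and buffer names follow that codebase, but treat the details as approximate):

# assumes: import torch; from isaacgym import gymtorch
def step(self, actions):
    clip_act = self.cfg.normalization.clip_actions
    self.actions = torch.clip(actions, -clip_act, clip_act).to(self.device)
    self.render()
    for _ in range(self.cfg.control.decimation):
        self.torques = self._compute_torques(self.actions)
        self.gym.set_dof_actuation_force_tensor(self.sim, gymtorch.unwrap_tensor(self.torques))
        self.gym.simulate(self.sim)
        self.gym.refresh_dof_state_tensor(self.sim)
    self.post_physics_step()  # termination checks, rewards, observations, resets
    clip_obs = self.cfg.normalization.clip_observations
    self.obs_buf = torch.clip(self.obs_buf, -clip_obs, clip_obs)
    return self.obs_buf, self.rew_buf, self.reset_buf, self.extras
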
create_sim(self) structure

Set self.up_axis_idx

Create the simulation: self.sim = self.gym.create_sim()

Create the terrain

  • Terrain
  • self._create_ground_plane()
  • self._create_heightfield()
  • self._create_trimesh()

Create the environments: self._create_envs()

  1. loads the robot URDF/MJCF asset
  2. For each environment
    • creates the environment
    • calls DOF and Rigid shape properties callbacks
    • create actor with these properties and add them to the env
  3. Store indices of different bodies of the robot
  • asset_path, asset_root, asset_file
  • asset_options = gymapi.AssetOptions()
  • self.gym.load_asset()
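
A sketch of the asset-loading step listed above, assuming gym and sim objects already exist (the path and the option values are illustrative, not the repository's actual settings):

from isaacgym import gymapi

asset_root = "resources/robots/go1/urdf"   # assumed location
asset_file = "go1.urdf"

asset_options = gymapi.AssetOptions()
asset_options.fix_base_link = False
asset_options.collapse_fixed_joints = True

robot_asset = gym.load_asset(sim, asset_root, asset_file, asset_options)
num_dof = gym.get_asset_dof_count(robot_asset)
num_bodies = gym.get_asset_rigid_body_count(robot_asset)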

Isaac Gym

Read the official documentation (in the docs folder shipped with Isaac Gym) together with the blog series RL Isaac Gym | hongliyu_lvliyu 的博客

Simulation Setup

The core API, including supporting data types and constants, is defined in the gymapi module.

The gym object by itself doesn’t do very much. It only serves as a proxy for the Gym API.

The sim object contains physics and graphics contexts that will allow you to load assets, create environments, and interact with the simulation.

Physics Simulation

An actor is an instance of a GymAsset. The function create_actor adds an actor to an environment and returns an actor handle that can be used to interact with that actor later. For performance reasons, it is a good practice to save the handles during actor creation rather than looking them up every time while the simulation is running.

Each actor has an array of rigid bodies, joints, and DOFs.

Fixed, revolute, and prismatic joints are well-tested and fully supported.

Each degree of freedom can be independently actuated.

Controlling actors is done using the degrees-of-freedom. For each DOF, you can set the drive mode, limits, stiffness, damping, and targets. You can set these values per actor and override the default settings loaded from the asset.

DOF property arrays can be accessed for assets (get_asset_dof_properties) and individual actors (get_actor_dof_properties/set_actor_dof_properties). The getters return structured Numpy arrays with the following fields: hasLimits, lower, upper, driveMode, stiffness, damping, velocity, effort, friction, armature.

Note that DOF states do not include the pose or velocity of the root rigid body, so they don’t fully capture the actor state.

Tensor API

The Gym tensor API uses GPU-compatible data representations for interacting with simulations. It allows accessing the physics state directly on the GPU without copying data back and forth from the host. It also supports applying controls using tensors, which makes it possible to set up experiments that run fully on the GPU.

Tensors are well-established data structures for storing GPU-compatible data. Popular frameworks like PyTorch and TensorFlow support tensors as a core feature. The Gym tensor API is independent of other frameworks, but it is designed to be easily compatible with them. The Gym tensor API uses simple tensor descriptors, which specify the device, memory address, data type, and shape of a tensor. There is no special API for manipulating the data in the Gym tensors. Instead, the tensor descriptors can be converted to more usable tensor types, like PyTorch tensors, using interop utilities. Once a Gym tensor is “wrapped” in a PyTorch tensor, you can use all of the existing PyTorch utilities to work with the contents of the tensor.

To use GPU tensors, you must set the use_gpu_pipeline flag to True in the SimParams used to create the simulation. Also, you should configure PhysX to use the GPU.

Finally, after all the environments are fully set up, you must call prepare_sim to initialize the internal data structures used by the tensor API.

PyTorch ?

PyTorch 介绍以及基本使用、深入了解、案例分析 | _ㄣ知冷煖★的博客


2023.07.22

Isaac Gym - Tensor API

Simulation Setup
gym = gymapi.acquire_gym()

The tensor API is currently available with PhysX only

To use GPU tensors, you must set the use_gpu_pipeline flag to True in the SimParams used to create the simulation. Also, you should configure PhysX to use the GPU:

sim_params = gymapi.SimParams()
...
sim_params.use_gpu_pipeline = True  # run the entire pipeline on the GPU
sim_params.physx.use_gpu = True     # run the PhysX simulation itself on the GPU

sim = gym.create_sim(compute_device_id, graphics_device_id, gymapi.SIM_PHYSX, sim_params)

Finally, after all the environments are fully set up, you must call prepare_sim to initialize the internal data structures used by the tensor API:

# ...create sim, envs, and actors here...

gym.prepare_sim(sim)
Physics State

After calling prepare_sim, you can acquire the physics state tensors. These tensors represent a cache of the simulation state in an easy-to-use format. It is important to note that these tensors hold a copy of the simulation state. They are not the same data structures as used by the underlying physics engine.

Actor Root State Tensor

A Gym actor can consist of one or more rigid bodies. All actors have a root body. The root state tensor holds the state of all the actor root bodies in the simulation.

To acquire the root state tensor:

# a generic tensor descriptor (not directly usable; wrap it in a PyTorch Tensor object later)
_root_tensor = gym.acquire_actor_root_state_tensor(sim)

# to access the contents of the tensor, wrap it in a PyTorch Tensor object using the provided gymtorch interop module
root_tensor = gymtorch.wrap_tensor(_root_tensor)
# the shape of this tensor is (num_actors, 13)

The tensor returned by gym is a Gym tensor descriptor; use gymtorch.wrap_tensor to convert it into a PyTorch tensor so that it can be used from PyTorch. The inverse conversion is done with gymtorch.unwrap_tensor.

Calling gym.refresh_actor_root_state_tensor(sim) will fill the tensor with the latest values from the physics engine. All the views or slices you created from this tensor will update automatically, since they all refer to the same memory buffer. Generally, you’ll want to do this after each call to gym.simulate.

root_positions = root_tensor[:, 0:3]
root_orientations = root_tensor[:, 3:7]
root_linvels = root_tensor[:, 7:10]
root_angvels = root_tensor[:, 10:13]

# main simulation loop
while True:
    # step the physics simulation
    gym.simulate(sim)

    # refresh the state tensors
    gym.refresh_actor_root_state_tensor(sim)

    # ...use the latest state tensors here...

As a contrived example, suppose you want to raise all the actors by one unit along the y-axis. You could modify the root positions like this:

# modify the root state tensor in-place
offsets = torch.tensor([0.0, 1.0, 0.0], device=root_tensor.device).repeat(num_actors, 1)
root_positions += offsets  # modifies the wrapped PyTorch tensor (and the shared buffer)

gym.set_actor_root_state_tensor(sim, _root_tensor)  # takes the Gym tensor descriptor as input

Another example is doing a periodic reset of actor roots, which would teleport them to their original locations once every 100 steps:

# acquire root state tensor descriptor
_root_tensor = gym.acquire_actor_root_state_tensor(sim)

# wrap it in a PyTorch Tensor
root_tensor = gymtorch.wrap_tensor(_root_tensor)

# save a copy of the original root states
saved_root_tensor = root_tensor.clone()

step = 0

# main simulation loop
while True:
    # step the physics simulation
    gym.simulate(sim)

    step += 1

    if step % 100 == 0:
        gym.set_actor_root_state_tensor(sim, gymtorch.unwrap_tensor(saved_root_tensor))
Degrees-of-Freedom

The state of each DOF is represented using two 32-bit floats, the DOF position and DOF velocity. For prismatic (translation) DOFs, the position is in meters and the velocity is in meters per second. For revolute (rotation) DOFs, the position is in radians and the velocity is in radians per second.

The DOF state tensor contains the state of all DOFs in the simulation. The shape of the tensor is (num_dofs, 2). The total number of DOFs can be obtained by calling gym.get_sim_dof_count(sim). The DOF states are laid out sequentially. The tensor begins with all the DOFs of actor 0, followed by all the DOFs of actor 1, and so on. The ordering of DOFs for each actor is the same as with the functions get_actor_dof_states and set_actor_dof_states.
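
A sketch of the same acquire/wrap pattern for the DOF state tensor, splitting it into position and velocity views (num_envs and the per-actor DOF count are assumptions about your setup):

_dof_states = gym.acquire_dof_state_tensor(sim)
dof_states = gymtorch.wrap_tensor(_dof_states)   # shape: (num_dofs, 2)

dof_pos = dof_states[:, 0].view(num_envs, -1)    # positions, grouped per actor
dof_vel = dof_states[:, 1].view(num_envs, -1)    # velocities

gym.refresh_dof_state_tensor(sim)  # call after each gym.simulate to update the cache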

All Rigid Body States

The rigid body state tensor contains the state of all rigid bodies in the simulation. The state of each rigid body is the same as described for the root state tensor - 13 floats capturing the position, orientation, linear velocity, and angular velocity. The shape of the rigid body state tensor is (num_rigid_bodies, 13). The total number of rigid bodies in a simulation can be obtained by calling gym.get_sim_rigid_body_count(sim). The rigid body states are laid out sequentially. The tensor begins with all the bodies of actor 0, followed by all the bodies of actor 1, and so on. The ordering of bodies for each actor is the same as with the functions get_actor_rigid_body_states and set_actor_rigid_body_states.

Control Tensors

The various state tensors (root state tensor, DOF state tensor, and rigid body state tensor) are useful for getting information about actors and setting new poses and velocities instantaneously. Setting states this way is appropriate during resets, when actors need to return to their original pose or restart a task using new initial conditions. However, setting new states directly using those tensors should be done sparingly.

To manage actor behavior during simulation, you can apply DOF forces or PD controls using the following API.

# get total number of DOFs
num_dofs = gym.get_sim_dof_count(sim)

# generate a PyTorch tensor with a random force for each DOF
actions = 1.0 - 2.0 * torch.rand(num_dofs, dtype=torch.float32, device="cuda:0")

# apply the forces
gym.set_dof_actuation_force_tensor(sim, gymtorch.unwrap_tensor(actions))

Importing the Go1 Model into legged_gym

Modifying the Go1 Model

Used PyCharm's Compare Files feature to compare a1.urdf from legged_gym's resources folder with Go1.urdf from the Robots folder of MICRO_Quadruped_ARCHIVE.

Make the following modifications to Go1.urdf:

  1. Add the base link and floating base joint sections
  2. For <joint name="**_foot_fixed" type="fixed">, add dont_collapse="true"
  3. Remove the transmission sections
Questions that matter for deployment on the real robot
  1. The observation space and its units (if any)
  2. The action space and its units (if any)
  3. default_dof_pos and its ordering
  4. How actions are converted into torques
  5. Which scales and clips are applied
  6. $K_p$ (stiffness), $K_d$ (damping)
Summary
  1. observation space (48)
    • base_lin_vel (3) * lin_vel_scale [2.0]
    • base_ang_vel (3) * ang_vel_scale [2.0]
    • projected_gravity (3) [normalized]
    • commands (3) * [lin_vel_scale, lin_vel_scale, ang_vel_scale]
    • dof_pos - default_dof_pos (12)
    • dof_vel (12) * dof_vel_scale [0.05]
    • actions (12)
  2. action space (12)
  3. self.dof_names:
    • hip / thigh / calf
    • FL / FR / RL / RR
    • FL hip / thigh / calf | FR hip / thigh / calf | RL hip / thigh / calf | RR hip / thigh / calf
  4. Position Mode (see the sketch after this list):
    • the actions in the formula below have already been clipped
    • torque = $K_p$ (actions $\cdot$ action_scale + default_dof_pos - dof_pos) - $K_d$ $\cdot$ dof_vel
  5. self.default_dof_pos = $[0.1, 0.65, -1.25, -0.1, 0.65, -1.25, 0.1, 0.65, -1.25, -0.1, 0.65, -1.25]$
  6. $K_p$ = 100, $K_d$ = 2
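
A sketch of the position-mode torque computation in item 4, written with legged_gym-style member names (the clipping limit and cfg fields are assumptions):

# assumes: import torch
def _compute_torques(self, actions):
    # actions have already been clipped in step()
    actions_scaled = actions * self.cfg.control.action_scale
    torques = self.p_gains * (actions_scaled + self.default_dof_pos - self.dof_pos) \
              - self.d_gains * self.dof_vel
    return torch.clip(torques, -self.torque_limits, self.torque_limits)
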
  1. 中枢模式发生器Central pattern generators (CPGs) | 知乎
  2. Central pattern generators(CPG)模型 | cheetaher Blog
  3. Central pattern generators for locomotion control in animals and robots: A review

2023.07.23

Isaac Gym - Tensor API

For continuity, this content has been merged into yesterday's section of the same name.

Commonly Used Gym APIs
Actor (root_state)

gym.get_sim_actor_count(sim)

gym.get_actor_index(env, actor_handle, gymapi.DOMAIN_SIM)

gym.acquire_actor_root_state_tensor(sim)

gym.refresh_actor_root_state_tensor(sim)

gym.set_actor_root_state_tensor(sim, _root_states)

gym.set_actor_root_state_tensor_indexed()
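
A sketch of the indexed variant, as it is typically used to reset only selected actors (env_ids is a hypothetical tensor of simulation-domain actor indices, and root_states is the wrapped root state tensor from earlier):

actor_ids = env_ids.to(dtype=torch.int32)
gym.set_actor_root_state_tensor_indexed(sim,
                                        gymtorch.unwrap_tensor(root_states),
                                        gymtorch.unwrap_tensor(actor_ids),
                                        len(actor_ids))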

DOF

gym.get_sim_dof_count(sim)

gym.acquire_dof_state_tensor(sim)

gym.refresh_dof_state_tensor(sim)

gym.set_dof_state_tensor(sim, _dof_states)

gym.set_dof_state_tensor_indexed(sim, _dof_state, gymtorch.unwrap_tensor(actor_indices), 3)

gym.get_actor_dof_count(env, actor)

gym.get_actor_dof_states / gym.set_actor_dof_states

gym.get_actor_dof_properties / gym.set_actor_dof_properties

Rigid Body

gym.get_sim_rigid_body_count(sim)

gym.get_actor_rigid_body_count(env, actor)

gym.get_actor_rigid_body_states / gym.set_actor_rigid_body_states

gym.get_sim_rigid_body_states / gym.set_sim_rigid_body_states

gym.get_env_rigid_body_states / gym.set_env_rigid_body_states

DOF Controls

gym.set_dof_actuation_force_tensor(sim, gymtorch.unwrap_tensor(actions))

gym.set_dof_actuation_force_tensor_indexed()

gym.set_dof_position_target_tensor()

gym.set_dof_position_target_tensor_indexed()

gym.set_dof_velocity_target_tensor()

gym.set_dof_velocity_target_tensor_indexed()

gym.set_actor_dof_position_targets()

gym.set_actor_dof_velocity_targets()

gym.apply_actor_dof_efforts()

Body Forces

gym.apply_rigid_body_force_tensors(sim, force_tensor, torque_tensor, gymapi.ENV_SPACE)

gym.apply_rigid_body_force_tensors(sim, force_tensor, None, gymapi.ENV_SPACE)

gym.apply_rigid_body_force_tensors(sim, None, torque_tensor, gymapi.ENV_SPACE)

gym.apply_rigid_body_force_at_pos_tensors(sim, force_tensor, pos_tensor, gymapi.ENV...)

Commonly Used gymapi Constants

gymapi.DOF_MODE_POS

gymapi.DOF_MODE_VEL

gymapi.DOF_MODE_EFFORT

  1. Contact force, friction, pressure, tension, and elastic force