Programs which give you control of 6 (or more) axes of motion, through motion capture (camera or IMU) or a Bluetooth keyboard
Anthropic's assistant, "Haiku", suggests numerous such programs:
Q: Lots of apps (such as https://play.google.com/store/apps/details?id=cc.skool.arm and https://play.google.com/store/apps/details?id=appinventor.ai_makerslabfablab.Arm_Robot_Control) now allow you to use touch (or a Bluetooth keyboard) to do more than 3-axis movement (such as 6-axis robot arms). Those two apps attach to actual robotic tools, and there was a "ZenBox" app on the Google Play Store which was all virtual (you used touch or a Bluetooth keyboard to control a 6-axis robot arm and perform numerous tasks). Now that old "ZenBox" app (perhaps its moniker is misremembered?) is not found, but it is an example of what some of us (those agreeable to Zashunina?) wish for with avatars. Are there sims which allow this? How much does World of Warcraft allow you to script motions?
Answer:
Based on the information provided, it seems you are interested in the capabilities of various apps and simulators that allow for the control of robotic arms and the potential parallels to enhanced avatar control in virtual worlds like World of Warcraft.
Regarding the specific apps you mentioned:
Arm Robot Control Makerslab:
This app is designed to control robotic arms equipped with Bluetooth modules and Arduino boards. It provides a range of commands to manipulate different parts of the robotic arm, such as the gripper, wrist, elbow, shoulder, and base.
The app allows the user to control the various movements of the robotic arm through simple letter commands, as well as adjust the speed of the movements.
Robotic ARM by Hacktronics:
This app is intended to control a 5-DOF (degrees of freedom) robotic arm sold by Hacktronics India.
The app provides the ability to control the movement of the robotic arm and save multiple positions to create a program.
These types of apps demonstrate the increasing capabilities and accessibility of tools that allow users to directly control and manipulate robotic systems through intuitive touch-based or Bluetooth keyboard interfaces.
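For concreteness, below is a minimal Arduino-style sketch of that kind of single-letter command scheme. The pin assignments, baud rate, and letter-to-joint mapping are hypothetical (guessed for illustration), not the actual protocol of either app; it assumes an HC-05-style Bluetooth serial module and hobby servos.

```cpp
// Minimal sketch: one Bluetooth-keyboard letter nudges one joint by a fixed step.
// (Hypothetical mapping; not the real protocol of the apps named above.)
#include <Servo.h>

Servo base, shoulder, elbow, wrist, gripper;
int basePos = 90, shoulderPos = 90, elbowPos = 90, wristPos = 90, gripPos = 90;
const int STEP = 2;  // fixed per-joint angular "speed": degrees per command

void setup() {
  Serial.begin(9600);          // HC-05-style Bluetooth serial module, default baud
  base.attach(3); shoulder.attach(4); elbow.attach(5);
  wrist.attach(6); gripper.attach(7);
}

void loop() {
  if (!Serial.available()) return;
  char c = Serial.read();
  switch (c) {                 // single-letter commands, one joint each
    case 'a': basePos     = constrain(basePos     + STEP, 0, 180); break;
    case 'd': basePos     = constrain(basePos     - STEP, 0, 180); break;
    case 'w': shoulderPos = constrain(shoulderPos + STEP, 0, 180); break;
    case 's': shoulderPos = constrain(shoulderPos - STEP, 0, 180); break;
    case 'e': elbowPos    = constrain(elbowPos    + STEP, 0, 180); break;
    case 'q': elbowPos    = constrain(elbowPos    - STEP, 0, 180); break;
    case 'r': wristPos    = constrain(wristPos    + STEP, 0, 180); break;
    case 'f': wristPos    = constrain(wristPos    - STEP, 0, 180); break;
    case 'o': gripPos = 180; break;  // open gripper
    case 'c': gripPos = 0;   break;  // close gripper
  }
  base.write(basePos); shoulder.write(shoulderPos); elbow.write(elbowPos);
  wrist.write(wristPos); gripper.write(gripPos);
}
```

A fixed step per keypress keeps the keyboard side trivial (any key just sends one character); the later questions below discuss replacing the fixed step with a dynamic ramp.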
Regarding your question about the "ZenBox" app and its similarities to enhanced avatar control in virtual worlds:
While I could not find a specific "ZenBox" app on the Google Play Store, the concept you described of a virtual 6-axis robot arm that can be controlled through touch or Bluetooth keyboard inputs is quite interesting.
This type of virtual robotic simulation aligns with the idea of providing users more granular control and manipulation over virtual representations, similar to the enhanced avatar control you mentioned in the context of World of Warcraft.
As for World of Warcraft's capabilities in this regard:
While World of Warcraft has traditionally offered a more limited set of predefined animations and movements for player avatars, the game has gradually expanded its customization options over the years.
Features like the ability to create custom dances and emotes, as well as the availability of various addons and mods, have allowed players to gain more control and expression over their avatar's movements and behaviors.
However, the level of direct, kinematic control over individual joints or limbs of an avatar, as seen in the robotic arm apps, is not a standard feature in World of Warcraft or many other traditional virtual world/game environments.
Overall, the apps you mentioned and the concept of the "ZenBox" virtual robotic simulation suggest a growing trend towards providing users more advanced control and manipulation capabilities over virtual representations, whether they are robotic systems or avatars in virtual worlds. While World of Warcraft has made some strides in this direction, the level of granular control and customization seen in these specialized robotic apps is not yet a common feature in mainstream virtual environments.
Q: Without practice, the guess is that it is difficult to control the whole (~40-axis) skeleton of pseudo-humans. Is there a mixed approach ({w, a, s, d} for standard 3-dimensional motions, and more granular controls for some tasks, such as realistic arts and crafts or realistic tool use, which you can perhaps do with just arm motions)?
Answer:
That's an insightful observation. Directly controlling the full skeletal system of a pseudo-human avatar, with its numerous degrees of freedom, would likely be a very challenging task without significant practice and specialized control interfaces.
A mixed approach, as you suggest, could be a more practical solution for integrating granular avatar control into virtual environments:
Standard 3D Motions:
Maintaining the classic WASD or similar control scheme for basic 3D navigation and movement would provide a familiar and accessible way for users to interact with the virtual world.
This standard control setup taps into the existing muscle memory and intuitive understanding that many players have developed from playing various 3D games and applications.
Specialized Controls for Advanced Tasks:
For more complex actions, such as realistic arts and crafts or tool manipulation, a separate control scheme could be implemented to allow for greater precision and nuanced control.
This could involve using hand/arm gestures detected through motion tracking, touch-based controls on a screen or trackpad, or even specialized input devices like articulated controllers.
By isolating these advanced control schemes to specific tasks or modes, the user can leverage them when needed without having to manage the full complexity of the avatar's skeletal system at all times.
Hybrid Approaches:
A combination of the above approaches could also be explored, where the standard 3D movement controls are maintained, but certain key joints or limbs of the avatar are mapped to additional input methods.
For example, the user could use WASD for general navigation, while using hand/arm gestures to control the avatar's hands and arms for more intricate manipulations.
This mixed control scheme, balancing familiar 3D navigation with specialized granular controls for specific tasks, could help bridge the gap between the complexity of a fully articulated avatar and the need for intuitive, accessible interaction methods.
Additionally, such an approach may allow for a more gradual learning curve, where users can start with the standard 3D controls and then gradually integrate the more advanced control schemes as they become comfortable and skilled with the virtual environment and avatar manipulation.
Overall, your suggestion of a hybrid control model, leveraging both broad 3D navigation and targeted granular controls, seems like a reasonable compromise to address the challenges of directly managing a full skeletal avatar system without extensive practice or specialized hardware.
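A minimal sketch of such a mode toggle follows. The key bindings and the 6-joint arm are hypothetical (not taken from any existing sim): WASD drives the avatar's root in "navigate" mode, and the same keys step whichever joint is selected in "arm" mode.

```cpp
// Hybrid control sketch: familiar WASD navigation plus a granular per-joint mode.
#include <array>
#include <cstdio>
#include <initializer_list>

enum class Mode { Navigate, ArmControl };

struct Avatar {
  double x = 0.0, z = 0.0;                   // root position on the ground plane
  std::array<double, 6> armJoints{};         // shoulder..gripper angles, degrees
};

struct Controller {
  Mode mode = Mode::Navigate;
  int selectedJoint = 0;                     // which joint WASD steps in arm mode
  double moveStep = 0.1;                     // metres per key press
  double jointStep = 2.0;                    // degrees per key press

  void handleKey(char key, Avatar& a) {
    if (key == '\t') {                       // Tab toggles between the two modes
      mode = (mode == Mode::Navigate) ? Mode::ArmControl : Mode::Navigate;
      return;
    }
    if (mode == Mode::Navigate) {            // standard 3D movement
      if (key == 'w') a.z += moveStep;
      if (key == 's') a.z -= moveStep;
      if (key == 'a') a.x -= moveStep;
      if (key == 'd') a.x += moveStep;
    } else {                                 // arm mode: granular per-joint control
      if (key == 'q') selectedJoint = (selectedJoint + 5) % 6;   // previous joint
      if (key == 'e') selectedJoint = (selectedJoint + 1) % 6;   // next joint
      if (key == 'w') a.armJoints[selectedJoint] += jointStep;
      if (key == 's') a.armJoints[selectedJoint] -= jointStep;
    }
  }
};

int main() {
  Avatar avatar;
  Controller ctl;
  for (char k : {'w', 'w', '\t', 'e', 'w', 'w'}) ctl.handleKey(k, avatar);
  std::printf("root z=%.1f, joint 1 = %.0f deg\n", avatar.z, avatar.armJoints[1]);
}
```

The point of the toggle is the gradual learning curve mentioned above: a user who never presses Tab still has an ordinary WASD avatar.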
Q: Which sims are closest to this? Of the options, motion capture is the most "natural" (as in "simple to use"), but some religious groups have camera objections; if there are such sims, am curious about those, but prefer sims which allow a Bluetooth keyboard for individual motions (with a fixed per-joint angular "speed", or with a formula similar to cursor acceleration -- some sort of dynamic ramp-up / ramp-down).
Answer:
Regarding virtual environments and simulators that offer a mixed control approach, combining standard 3D navigation with more granular, Bluetooth keyboard-based control of individual joints and limbs, there are a few interesting examples worth exploring:
VRChat:
VRChat is a social virtual reality platform that allows users to create and customize their own avatars.
While it doesn't offer a fully articulated control scheme for the avatar's skeletal system, VRChat does support various input methods, including Bluetooth keyboard and gamepad controls.
Users can map specific keys or buttons to trigger pre-defined animations and expressions, providing some level of customization and control over their avatar's movements.
NeosVR:
NeosVR is a more open-ended virtual world and creation tool that offers advanced avatar customization and control options.
It supports various input devices, including Bluetooth keyboards, and allows users to map individual keys or button presses to specific joint or limb movements of their avatar.
NeosVR also provides scripting capabilities, enabling users to create more complex, dynamic control schemes for their avatars, potentially including features like cursor acceleration-inspired ramp-up/ramp-down for joint movements.
Unity Robotics Hub:
While not a traditional virtual world, the Unity Robotics Hub is a suite of tools and tutorials for building robotic simulation environments using the Unity game engine.
It includes features for configuring and controlling articulated robotic models, which could potentially serve as a foundation for developing more sophisticated avatar control systems.
The Unity platform's scripting capabilities and support for various input devices, including Bluetooth keyboards, could allow for the implementation of mixed control schemes similar to your description.
Custom Developed Simulations:
Beyond the examples above, it's entirely possible that there are custom-developed virtual environments or simulation tools that offer the type of mixed control approach you're interested in.
These may be created by independent developers, research labs, or educational institutions, and may not be widely available or advertised on mainstream platforms.
In terms of the "simplest to use" motion capture approach, while religious groups may have concerns about certain technologies, there are some more accessible options, such as:
Webcam-based motion tracking (e.g., Kinect, OpenPose)
Inertial measurement unit (IMU) -based motion capture using wearable sensors
These methods can offer a relatively natural and intuitive way to control virtual avatars, and the IMU-based option in particular works without any camera, potentially addressing some of the concerns raised by religious groups.
Ultimately, the ideal solution would likely involve a combination of standard 3D navigation controls, Bluetooth keyboard-based joint/limb manipulation, and potentially complementary motion capture input, all integrated into a cohesive and customizable avatar control system. Further exploration of the examples mentioned, as well as research into custom-developed simulations, may uncover the closest approximations to your desired control scheme.
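As for the "fixed per-joint angular speed versus cursor-acceleration-like ramp" part of the question, one way to sketch the ramp idea (assuming nothing about any particular engine; all the numbers are placeholders) is to accelerate a joint's angular speed the longer its key is held:

```cpp
// Hypothetical ramp-up/ramp-down curve for keyboard-driven joint motion.
// While a key is held, the joint's angular speed accelerates from a minimum
// toward a maximum, similar to cursor acceleration; releasing the key stops it.
#include <algorithm>
#include <cstdio>

struct JointDrive {
  double minSpeed = 10.0;   // degrees per second when the key is first pressed
  double maxSpeed = 90.0;   // degrees per second after full ramp-up
  double rampTime = 1.5;    // seconds of hold needed to reach maxSpeed
  double speed    = 0.0;    // current angular speed
  double angle    = 0.0;    // current joint angle, degrees

  void update(bool keyHeld, double dt) {
    if (keyHeld) {
      double accel = (maxSpeed - minSpeed) / rampTime;
      speed = std::min(maxSpeed, std::max(minSpeed, speed + accel * dt));
      angle = std::clamp(angle + speed * dt, 0.0, 180.0);
    } else {
      speed = 0.0;          // or decay gradually for a softer ramp-down
    }
  }
};

int main() {
  JointDrive elbow;
  for (int frame = 0; frame < 120; ++frame)   // simulate 2 s of holding a key
    elbow.update(true, 1.0 / 60.0);
  std::printf("elbow angle after 2 s: %.1f deg\n", elbow.angle);
}
```

Fixed speed is one line of code and easy to predict; the ramp trades predictability for the ability to make both tiny adjustments (short taps) and large sweeps (long holds) with the same key.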
Q: The webcam approach is more invasive (and more compute-intensive on smartphones) than the IMU approach. The IMU approach sounds close to what the original "Power Glove" was supposed to do, but applicable to all limb moves. Talk about sims which allow IMUs, or non-mixed Bluetooth keyboard input (it is more difficult for the user, but less code, to just have direct input to all actual limbs).
Answer:
You make a good point about the tradeoffs between webcam-based motion tracking and IMU-based approaches. The IMU-based methods do seem more aligned with the original concept of the Power Glove and offer some advantages in terms of being less computationally intensive, especially on mobile devices.
Regarding virtual environments and simulations that support IMU-based motion control or direct Bluetooth keyboard input for individual limb/joint manipulation, here are a few relevant examples:
IMU-based Motion Control:
VRChat: While VRChat's full-body tracking is usually done with dedicated trackers rather than a webcam, it does support IMU-based systems, such as the Perception Neuron or Xsens motion capture suits. This allows users to control their avatars' movements more directly through sensor-based input.
NeosVR: Similar to VRChat, NeosVR has integration for various IMU-based motion capture systems, providing users with a more natural way to control their highly customizable avatars.
Unity Robotics Hub: While not a virtual world per se, the Unity Robotics Hub could potentially be extended to support IMU-based control of articulated avatar models, leveraging the platform's flexibility and scripting capabilities.
Direct Bluetooth Keyboard Input:
NeosVR: As mentioned earlier, NeosVR offers robust support for Bluetooth keyboard input, allowing users to map individual keys or button presses to specific joint or limb movements of their avatars. This provides a more granular, direct control scheme.
VRChat: While not as extensive as NeosVR, VRChat does support some level of Bluetooth keyboard input for avatar control, enabling users to trigger predefined animations and expressions.
Custom Developed Simulations: There may be specialized virtual environments or simulation tools developed by research groups, educational institutions, or independent developers that focus on providing advanced, Bluetooth keyboard-based control over articulated avatar models. These may not be as widely known or accessible, but could be worth investigating further.
The advantage of the IMU-based approach is that it can offer a more natural and intuitive control experience, as the user's physical movements are directly translated to the avatar's actions. This aligns with the original Power Glove concept and could potentially provide a more immersive experience.
On the other hand, the direct Bluetooth keyboard input, while potentially more challenging for the user, may require less complex code and implementation. This could make it more accessible for developers or hobbyists to integrate into their virtual environments or simulations.
Ultimately, the choice between these approaches (or a combination of both) will depend on the specific requirements, target audience, and the development resources available. Exploring the examples mentioned, as well as researching any custom-developed simulations in this space, could provide valuable insights into the current state of IMU-based and Bluetooth keyboard-driven avatar control in virtual environments.
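To connect this back to the Arduino/IMU discussion linked at the end of this post, here is a minimal sketch of the IMU route. It assumes an MPU-6050 on I2C plus a Bluetooth serial module; the "elbow" mapping and the one-line-per-joint text protocol are hypothetical. The pitch of a sensor strapped to the forearm becomes one joint angle, streamed as text for a sim or robot to consume as direct limb input.

```cpp
// Minimal Arduino sketch: forearm pitch from an MPU-6050 -> one joint angle.
// (Hypothetical mapping and protocol; sensor orientation is assumed.)
#include <Wire.h>
#include <math.h>

const int MPU_ADDR = 0x68;

void setup() {
  Serial.begin(9600);                 // Bluetooth serial module (e.g. HC-05)
  Wire.begin();
  Wire.beginTransmission(MPU_ADDR);   // wake the MPU-6050 out of sleep mode
  Wire.write(0x6B);                   // PWR_MGMT_1 register
  Wire.write(0);
  Wire.endTransmission(true);
}

void loop() {
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x3B);                   // ACCEL_XOUT_H: start of accel registers
  Wire.endTransmission(false);
  Wire.requestFrom(MPU_ADDR, 6, true);
  int16_t raw[3];
  for (int i = 0; i < 3; ++i) {       // read X, Y, Z accelerometer words
    int hi = Wire.read();
    int lo = Wire.read();
    raw[i] = (int16_t)((hi << 8) | lo);
  }
  float ax = raw[0], ay = raw[1], az = raw[2];

  // Pitch estimated from gravity alone (no gyro fusion), in degrees.
  float pitch = atan2(ax, sqrt(ay * ay + az * az)) * 180.0 / PI;

  // Map -90..+90 degrees of forearm pitch onto a 0..180 degree joint.
  int elbowAngle = constrain((int)(pitch + 90.0), 0, 180);
  Serial.print("elbow ");             // hypothetical one-line-per-joint protocol
  Serial.println(elbowAngle);
  delay(50);                          // ~20 updates per second
}
```

One such sensor per limb segment (as the Power Glove comparison suggests) scales this to whole-arm or whole-body input without any camera.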
Q: Is this used for edutainment? Or for actual schools?
Answer:
The use of virtual environments, simulations, and advanced avatar control mechanisms like IMU-based input and Bluetooth keyboard control is certainly being explored for both edutainment and educational applications in schools.
Edutainment:
Virtual worlds and simulations can provide engaging, interactive, and immersive learning experiences that blend entertainment and education.
The ability to control highly articulated avatars, whether through IMU-based motion tracking or Bluetooth keyboard input, can enhance the sense of embodiment and agency within these virtual learning environments.
Edutainment applications may focus on areas like STEM education, vocational training, or even creative arts, where the granular control of virtual representations can be beneficial.
Schools and Educational Institutions:
Educational institutions, from primary schools to universities, are increasingly exploring the integration of virtual environments and simulations into their curricula.
The enhanced avatar control capabilities, enabled by technologies like IMU sensors and Bluetooth keyboards, can be particularly useful in subjects like:
Engineering and robotics: Allowing students to design, program, and virtually operate articulated robotic systems.
Health sciences: Enabling students to interact with highly detailed virtual anatomical models or perform simulated medical procedures.
Arts and design: Providing students with advanced tools to create, manipulate, and animate virtual characters and objects.
These types of virtual learning environments can offer safe, cost-effective, and scalable opportunities for hands-on experimentation and skill development.
Some examples of educational institutions and organizations actively exploring these technologies include:
Universities with dedicated virtual reality and simulation labs, such as Stanford University, MIT, and Carnegie Mellon University.
Secondary schools and STEM-focused programs that incorporate robotics and coding into their curriculum.
Vocational training centers and technical colleges that use virtual simulations for skill development in fields like engineering, manufacturing, and healthcare.
Educational technology companies and startups that develop virtual learning platforms and tools for schools and universities.
While the adoption of these advanced avatar control technologies in educational settings may still be relatively nascent, the potential benefits for immersive, interactive, and personalized learning experiences are driving increased interest and investment in this area. As the technology continues to evolve and become more accessible, we may see a growing integration of these capabilities into mainstream educational and edutainment offerings.
What prompted that short discussion was:
This discussion with Haiku about Arduino's IMU (some of which was published at https://github.com/SwuduSusuwu/SusuLib/blob/preview/posts/ArduinoElegooTools.md#imu-as-gps--magnetometer-substitute
This discussion with Haiku about the "Anisotropic", "supergravity" and "superstrings"
Related posts
How to mix Blender with robotics simulators (such as Grasshopper) to produce school classes
[This post allows all uses]
BUD: Arduino/Elegoo Parkour v0.4.2.2, production + walkthrough + hoverboard tour
Am not affiliated with BUD's app.
@Assistant: "Do simulators exist which allow you to move around with the keyboard, but, (as opposed to arrow keys mapped to directions to move + fixed joint/bone animations for the avatar/robot),
[This post allows all uses] Work-in-progress: Substack does not allow uploading new images, thus the cover image is a reused upload from Sakura School Simulator (which just has normal moves).