
Language engineering vs. standard user interface

  • Writer: Ivaylo Fiziev
  • 2 days ago
  • 6 min read

Lately we have been working on material handling. One of our goals was to generate a pick and place robotic operation based on user input. The user is supposed to select the object to be picked, the frame to be used when picking it, and the pose of the tool when doing the pick. Then the user selects a frame representing the place location and the pose of the tool when doing the place. In addition, the user can specify start/end robot home positions (poses) and approach/depart positions relative to the selected frames. To fulfill the requirements we ended up with a fairly complex user interface (and this was not the only requirement). It took nearly two versions to develop and it may still change. For us this means additional effort for updating the user interface and the logic behind it. Still, it represents a single fixed use case that has to be taken as is. Users cannot easily change the behavior if they need to; the logic is hardcoded. The effort of supporting such a solution is considerable. The code we wrote is not trivial, since we need to describe all of this in a language-neutral way using the Process Simulate model for operations. That model is then used to generate the actual syntax of the target robotic language. This process usually involves a lot of workarounds. We fight with generating the operation, editing it, undoing the latest changes, follow mode, etc. When things get complex our code becomes a mess.

Isn't there another way to do the same thing, but this time make it both easy to implement and flexible for the user? This is what language engineering is all about, in particular domain-specific languages like the robotic languages. The best possible way to implement tasks like this is by using the robotic language itself. It is designed for the purpose. It is a DSL! It is supposed to make your life easy. By expressing your intention via a programming language you have the freedom to modify your logic the way you need it to be. Instead of working with a complex (often confusing) user interface, users will get a simple code snippet - in this case a pick and place snippet. The snippet carries the logic for moving between the home, approach, pick, place, depart and home locations. What is left for the user is to provide the actual locations. For this you need some help from the code editor. Provided that the editor allows it, you can pick the locations by using the standard frame picker control. And here is the cornerstone of this approach - we really need a quality code editor. It can help you edit the code. With context help you don't need to be an expert in robotic languages.
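
To make the idea concrete, here is a rough sketch of what such a snippet could look like, written as Python-style pseudocode. The move_to, grip and release helpers, the offset method and the location names are assumptions for illustration only, not part of any real robotic language or Process Simulate API.

# Illustrative pick and place snippet (hypothetical helpers, not a real API).
# The user only fills in the locations; the motion structure stays fixed.
def pick_and_place(robot, home, pick_frame, place_frame, approach, depart):
    robot.move_to(home)                          # start at the home position
    robot.move_to(pick_frame.offset(approach))   # approach the part
    robot.move_to(pick_frame)                    # pick pose
    robot.grip()                                 # close the gripper
    robot.move_to(pick_frame.offset(depart))     # depart with the part
    robot.move_to(place_frame.offset(approach))  # approach the place location
    robot.move_to(place_frame)                   # place pose
    robot.release()                              # open the gripper
    robot.move_to(place_frame.offset(depart))    # depart empty
    robot.move_to(home)                          # return home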



What is the effort of developing/modifying a snippet? Copy/paste a few lines of code - 5-10 minutes, a day maybe? Compare this to the six months we've spent on the user interface that is still not ready.

Each user can benefit from custom code snippets. They won't have to wait for us to provide a user interface; instead, the code editor will be the ultimate user interface. Process Simulate will become much simpler to use. No redundant UIs for editing the code (Comment Out is my favorite). No redundant complexity. Users will simply express themselves with code.

On our side the development time will be greatly reduced leaving space for new, innovative projects.

Simulation will be closer to reality since what you type is what you get. The code will be the only source of truth. It will work much like a VRC (virtual robot controller).

That is all good but ...

What are the downsides of language engineering?


  1. You need to learn the syntax. This can be a problem for inexperienced users. Context help is essential here. For experienced users this should really be welcome.

  2. Changing the syntax is not easy. If you need to change the robot vendor/model, you'll have a serious problem. Translating from one language to another brings the need for a transpiler (https://en.wikipedia.org/wiki/Source-to-source_compiler); a toy sketch of one follows after this list. Transpilers are hard to do and we'll need many of them. This is a real showstopper!

  3. PS-specific OLP commands (# Grip, # Release, # Drive, etc.) are not part of any robotic language. You won't be able to work with these since they are artificially introduced. You'll have to express the same with the robotic language instead.

  4. Language interpreters are really complex. You can literally spend years building one of these.
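
To give a feeling for point 2, here is a minimal, deliberately naive transpiler sketch in Python. The two dialects and their statement forms are invented for illustration; real robotic languages differ in data types, motion parameters and control flow, which is exactly why a simple rule table like this breaks down quickly and full transpilers are so expensive to build.

import re

# Toy source-to-source translation between two invented robot dialects.
RULES = [
    (re.compile(r"^MOVEJ\s+(\w+)$"), r"PTP \1"),      # joint motion
    (re.compile(r"^MOVEL\s+(\w+)$"), r"LIN \1"),      # linear motion
    (re.compile(r"^WAIT\s+(\d+)$"),  r"WAIT SEC \1"), # time delay
]

def transpile(line):
    for pattern, replacement in RULES:
        if pattern.match(line):
            return pattern.sub(replacement, line)
    raise ValueError("No translation rule for: " + line)

for statement in ["MOVEJ home", "MOVEL pick", "WAIT 2"]:
    print(transpile(statement))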


So what is the solution?


It is a strange game. On one side Process Simulate seems to rely heavily on robotic languages, but on the other it has nothing to do with them: it uses the operation object model instead. At the same time it works on the language syntax to handle logic. This duality confuses the senses. It is a weird mix of technologies that should not coexist. This is also the reason you cannot freely edit the robotic code. You are actually editing the model of operations. The code is just for show.


Here are some thoughts off the top of my head:


The programming language is the ultimate tool for running simulations. It is also what the real robot executes. The operation model is an abstract model that tries to mimic the motion statements. It is not as complex as the state machine built into the language syntax, but it can also run simulations. I would expect one or the other but not both. It could be that the use case of changing robot vendors has led to the current design, but ... It would be more logical to have the operation model generate the code (like it does today) and have the simulation rely entirely on the language, i.e. not use the operations for simulation. This is a nice middle ground I think. Then we could have the best of both worlds. Users would be able to work with the code, and when changing vendors the code would still be auto-generated. However, for this to happen we would have to throw away all the controllers we currently have and implement them as fully functional language interpreters. Obviously this won't happen anytime soon.
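
Here is a minimal sketch of that middle ground, with invented class and function names: the operation model is used only to emit code, and a tiny interpreter (standing in for a fully functional virtual controller) executes that code to drive the simulation. None of this reflects actual Process Simulate internals.

# Hypothetical split: the operation model emits code, and only the generated
# code is interpreted during simulation.
class Operation:
    """Simplified stand-in for an operation in the object model."""
    def __init__(self, kind, target):
        self.kind = kind
        self.target = target

def generate_code(operations):
    """Operation model -> robotic-language-like source (invented syntax)."""
    return [op.kind.upper() + " " + op.target for op in operations]

class LoggingController:
    """Stand-in for a language interpreter / virtual controller."""
    def execute(self, kind, target):
        print("executing", kind, "->", target)

def simulate(source, controller):
    """The simulation consumes only the code, never the operation objects."""
    for line in source:
        kind, target = line.split(maxsplit=1)
        controller.execute(kind, target)

ops = [Operation("movej", "home"), Operation("movel", "pick")]
simulate(generate_code(ops), LoggingController())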

Even with this approach, keeping the operation model and the robotic code in sync is a challenge. Any change in the user interface should be reflected in the code without losing the changes made in the code, and the opposite is also true. This is why I say the two should not coexist in the first place. It should be one or the other. For example, Visual Components does not work with the robotic code at all. It only generates it from its object model of robotic instructions, and the simulation is always based on the object model, i.e. this duality simply does not exist!

Process Simulate seems to do the same internally but shows the actual code to the user. This leaves the user with the impression that the code actually runs the simulation, when in fact it only partially does. When you work with code, the expectation is that language engineering rules apply: code editing, code completion, code validation, context-based help - the standard tools of a quality code editor. Instead you get a ton of dialog boxes, each editing a specific statement. This is frustrating for anyone with a programming background! Do robotic engineers actually like it? Maybe because it looks similar to the teach pendant terminals on a real robot? Could it be that we have such different mindsets?

Imagine if Visual Studio or any other IDE worked the same way ... would you use such an IDE? - "Excuse me but I am looking for the menu that comments out my for loop ... Uugghh"


To me, mixing the robotic language syntax with a custom syntax is obscure. If you do this it is no longer the same language - you are changing the syntax! What if there are collisions with a specific robotic language statement or expression? For example, the hash character in SCL dereferences a variable, so '# Grip' should refer to a variable named Grip?!? If your intent is to activate the gripper, why not just send a signal to it? It will catch it and do the gripping. This is what "Practical use case: Pick and place using a SCL gripper" is all about. What happens with these custom commands when you download the code to the real robot? Are they just gone? How do you make sure the gripper really works then?
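
For the signal idea, here is a small assumed sketch in Python. The signal names, the I/O interface and the wait/timeout behaviour are illustrative only, but the point stands: because gripping is expressed with ordinary I/O statements rather than a '# Grip' comment, the same logic survives a download to the real controller.

import time

class FakeIO:
    """Stand-in for the controller's I/O interface (illustrative only)."""
    def set_output(self, name, value):
        print("output", name, "=", value)
    def wait_for_input(self, name, timeout_s):
        time.sleep(0.1)   # pretend the gripper acknowledges immediately
        return True

def close_gripper(io, timeout_s=2.0):
    io.set_output("GripperClose", True)                  # command the gripper
    if not io.wait_for_input("PartGripped", timeout_s):  # wait for feedback
        raise RuntimeError("Gripper did not confirm the grip")

close_gripper(FakeIO())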


I have some other questions but ...


I am puzzled by the strangeness I find here. Based on what I know about language engineering, I have trouble finding a rational explanation for how this works.

To me it looks like it was a wrong decision to have both the object model and the language drive the simulation together. I would vote for the language-only approach ... after all, the language is what drives the real robot.


Language engineering is a complex area. It has always been an essential part of computer science. It requires a lot of expertise and a lot of investment. Somehow I was under the impression that Process Simulate had already made all this investment. As for my work on SCL, I always stick to the basic principles of language engineering. This is the way to make this part of the product live up to the standards.


For those that are interested in language engineering, here is a really nice introduction: https://www.oreilly.com/library/view/build-your-own/9781804618028/

 
 
 
