Automating the creation of video documentation

Several hundred people attended our webinar last week, and the audience asked many excellent questions. Since we didn’t get to answer all of them during the live session, we thought it would be helpful to share some of them here. The full webinar recording is here if you missed it (registration with BrightTALK is required).

The level of engagement was remarkable, as was the number of meetings requested following the event. Once again, this confirms that keeping software videos up to date is painful, time-consuming, and expensive, and that product documentation, customer education, client success, and marketing organizations are looking for a solution.

Questions from the webinar

How do you edit the narration and movement of the mouse to make corrections?

Typically, you upload a document into the Videate application and it is transformed into what we call a Spiel: our scripting language that combines the text (narration) with the interaction instructions (mouse movement, form filling, and annotations). You can then preview the video in the Videate application.

If things are not correct, you will see where the issues are in the Spiel, and you can correct your source file and upload it again. We also provide an editor in the Videate application that allows you to edit the Spiel directly.
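
To make the idea concrete, here is a purely hypothetical sketch, written in Python, of how a document’s steps might map to a script that pairs narration with interaction instructions. The field names and structure are illustrative assumptions only, not the actual Spiel syntax.

```python
# Hypothetical illustration only -- not the actual Spiel format.
# Each step pairs a narration paragraph with the UI interactions
# that should be performed while that paragraph is spoken.
spiel = [
    {
        "narration": "Click the Settings icon in the top-right corner.",
        "interactions": [
            {"action": "move_mouse", "target": "Settings icon"},
            {"action": "click", "target": "Settings icon"},
        ],
    },
    {
        "narration": "Enter your workspace name and save the changes.",
        "interactions": [
            {"action": "fill_form", "target": "Workspace name", "value": "Acme Docs"},
            {"action": "click", "target": "Save"},
        ],
    },
]
```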

Voice-overs take different amounts of time. How do you link the video speed to the voice-over speed?

Videate speaks the text and moves the mouse just as you would if you were using a screen recording tool. With Videate, it is all done through automation in the cloud and played out in real time. Each paragraph may have embedded interactions, and the engine does not continue to the next paragraph until all of those interactions are complete. By design, this creates a natural “gating” effect, so there is nothing to synchronize: the video rendering engine plays everything in real time, already synchronized.
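
That gating behavior can be pictured as a simple loop: narrate a paragraph, drive its interactions to completion, and only then move on. The sketch below is a conceptual illustration assuming hypothetical speak() and perform() helpers; it is not Videate’s rendering engine.

```python
def render_video(spiel, speak, perform):
    """Conceptual sketch of the gating effect: the narration and
    interactions for one paragraph must finish before the next
    paragraph starts, so nothing needs to be synchronized afterward."""
    for step in spiel:
        speak(step["narration"])       # text-to-speech, played in real time
        for interaction in step["interactions"]:
            perform(interaction)       # mouse moves, clicks, form fills
        # only now does playback advance to the next paragraph
```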

Do you have to use a DITA document to create these videos?

No, DITA is only one option. We support Google Docs, Word, XML, and AsciiDoc. DITA is a widely used option because it has built-in semantics that accelerate the learning process for our engine. As long as documents are written in a consistent pattern (DITA is a very structured methodology, but other formats work as well), we are in good shape.
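
To illustrate why built-in semantics help: a DITA task already separates each step’s command from the surrounding prose, so a structured source maps naturally onto narration and interaction units. The parsing sketch below uses Python’s standard ElementTree on a minimal DITA-style task; it is a simplified assumption, not Videate’s importer.

```python
import xml.etree.ElementTree as ET

# Minimal DITA-style task: each <step>/<cmd> is a natural narration
# and interaction unit for a generated video.
task = ET.fromstring("""
<task id="create_project">
  <title>Create a project</title>
  <taskbody>
    <steps>
      <step><cmd>Click New Project.</cmd></step>
      <step><cmd>Enter a project name and click Save.</cmd></step>
    </steps>
  </taskbody>
</task>
""")

for step in task.iter("step"):
    print(step.find("cmd").text)  # one candidate narration line per step
```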

Does the content provided to be fed into the engine have to be strictly procedural?

Can overview information be included as intro material?

The content does not need to be strictly procedural. Overview information can be included as well. A popular model for video scripts is “Narration, Click Path, and Special Effects,” and you can include Introductions and Summaries as required.

Do you provide accessibility options for your videos, such as closed captions and a second audio description track?

Accessibility rules at certain companies require that all synchronized audio/video electronic documentation provide native closed captions and audio descriptions.

We provide native closed captions as part of the video generation.
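
As a rough illustration of how captions can come directly out of generation: because the engine already knows each narration segment and when it is spoken, emitting a caption file is largely a formatting exercise. The SRT writer below is a generic sketch with assumed timing data, not Videate’s caption pipeline.

```python
def to_srt(segments):
    """segments: list of (start_seconds, end_seconds, text) tuples,
    assumed to come from the narration engine's timing data."""
    def stamp(t):
        hours, rem = divmod(int(t), 3600)
        minutes, seconds = divmod(rem, 60)
        millis = int((t - int(t)) * 1000)
        return f"{hours:02}:{minutes:02}:{seconds:02},{millis:03}"

    entries = []
    for i, (start, end, text) in enumerate(segments, 1):
        entries.append(f"{i}\n{stamp(start)} --> {stamp(end)}\n{text}\n")
    return "\n".join(entries)

print(to_srt([(0.0, 3.2, "Click the Settings icon."),
              (3.2, 6.5, "Enter your workspace name and save.")]))
```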

If the videos are all produced with AI (no human interaction), can you also provide an additional/secondary audio description track when the video is generated?

Audio description provides information about actions, characters, scene changes, on-screen text, and other visual content. It supplements the regular audio track of a program and is usually added during existing pauses in dialogue.

We can provide a second audio track as well, although it is not the default.

Does the software work on translated documents, or is the translation of the video script a separate task that is done later and then converted with text-to-speech?

We build the video from the documents and the user interface in English first. Then we run the document through translation, bring along the interactions (which are performed on English elements in the UI), and intermingle those actions into the translated result. We can integrate with Translation Management Systems, which can ensure the translation phase is accurate, or run the documents through Google Translate for a baseline translation that a native speaker can vet and correct for the “last mile.”
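
One way to picture that workflow: translate only the narration text while leaving the interaction targets pointed at the English UI elements, then reassemble the two. The sketch below is an assumption-laden illustration in Python (translate() stands in for a TMS or Google Translate call); it is not Videate’s actual pipeline.

```python
def localize_spiel(spiel, translate):
    """Translate narration only; interactions keep their original
    English UI targets so the automation can still find the elements."""
    return [
        {
            "narration": translate(step["narration"]),   # TMS or machine translation
            "interactions": step["interactions"],         # unchanged English UI selectors
        }
        for step in spiel
    ]

# Usage with a stand-in translator and a one-step script:
spiel = [{"narration": "Click Save.",
          "interactions": [{"action": "click", "target": "Save"}]}]
localized = localize_spiel(spiel, lambda text: f"[fr] {text}")
```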

DITA gets a lot of mention as a specific XML format. Does DocBook offer the same “already semantically marked-up” advantage?

Yes, DocBook will work as well.

Have you ever used your process on AsciiDoc?

Yes, AsciiDoc can be supported as well.

If you’d like to see a demo with some more real-world examples, or just to ask more questions, please contact us.
