Question: Recommendation for Inline Text Translation of a Web Application

Cecil

19+ years progress programming and still learning.
I've spent the last few years developing and maintaining an in-house web application, and in the last year we have merged with a Japanese company. The Japanese users have requested that they be able to use the same application, but it needs to be translated into Japanese because none of them can read English (except one).

So what I have done is create a simple ABL function which accepts the base-language English text, looks it up in a translation table, finds the relevant Japanese, and essentially substitutes the Japanese for the English.
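To illustrate the idea (in JavaScript rather than ABL, and with a hypothetical in-memory map standing in for the database lookup table), the function boils down to this:

```javascript
// Hypothetical stand-in for the translation lookup table
// (in the real application this would be a database table).
const translationTable = new Map([
  ["Submit", "送信"],
  ["Cancel", "キャンセル"],
]);

// Same shape as the ABL function: accept the base English text,
// look it up, and substitute the Japanese text if a row exists.
function translate(englishText) {
  // Fall back to the English text when no translation is found.
  return translationTable.get(englishText) ?? englishText;
}
```

The fallback-to-English behaviour is an assumption, but it is a common safety net so that untranslated strings still render something readable.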

For the initial translation I'm using a third-party web service to do the translations from English into Japanese and storing the results in the database. I then have a maintenance screen which enables our Japanese users to edit the translations for grammar corrections.

Because the translation function has to be called multiple times to access the translation lookup table, I am starting to worry about potential performance issues and the user experience while the web pages load and the English is translated into Japanese.

In the attached two screenshots, I have the English version and the translated version. Wherever there is a block of text, the translation function is being called.

What would you recommend for improving server-side performance?

NOTE:
What I'm trying to avoid is maintaining a separate copy of the code for each language.
 

Attachments

  • Online Reporting English.png (147.1 KB)
  • Online Reporting Translated JPN.png (130.3 KB)

GregTomkins

Active Member
We have a fairly elaborate system for dealing with English vs French, but it boils down to this:

1. Everywhere a translatable string is to be shown, we use a translation function, eg. $('#my-id').html(getTranslation('My House'))
2. This function is purely a client-side operation so it's super-fast.
3. It relies on a JS object that maps English terms to French, eg. var translations = {"My House": "Mi Casa", "Your House": "Su Casa"}.
4. This object is created server-side when we deploy code by parsing JS looking for getTranslation() calls and looking them up in our ABL translation DB.
5. It is handled as a .json file that gets downloaded when the user logs in.
6. This parsing etc. is probably a pretty expensive operation, but it only happens when we deploy new code.
7. There is also the cost of moving this giant translation object across the network, but in the big scheme of things, it is small, a few hundred K.
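The client-side pieces in steps 1–3 above can be sketched as follows (the names come from the post; the fall-back-to-English behaviour when a key is missing is an assumption):

```javascript
// Step 3: JS object mapping English terms to their translations,
// generated server-side at deploy time and downloaded as a .json file.
var translations = { "My House": "Mi Casa", "Your House": "Su Casa" };

// Step 1/2: purely a client-side lookup, so it is effectively instant.
function getTranslation(englishTerm) {
  // Assumed fallback: show the English term if no translation exists.
  return translations[englishTerm] || englishTerm;
}
```

In the page it would be used exactly as in step 1, e.g. `$('#my-id').html(getTranslation('My House'))`.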

For us the major issue was to leverage our existing ABL translation DB, in which we have invested thousands of hours. Of course, as with you, creating duplicate code or making $.ajax calls every time we needed to translate something were non-starters.

I am not familiar with the issues that would be created by using a non-A-Z alphabet.

HTH
 

Cecil

19+ years progress programming and still learning.
Hey thanks for that. It's given me food for thought.

The .json file which gets downloaded to the browser: is that generated each time the user logs in, or is it generated once and stored on your web servers as a static file?

I do like the idea of the browser doing all the work of substituting the text for the translated version. The only concern, of course, is the user experience, which depends on the power of the user's computer and which browser they're using.

From a development point of view, what controls do you have to make sure new English words have been entered into the translation database? Basically, how do you make sure that all the English words have been translated?

Also, on a side note, are you using GZIP compression for the .json file via the web server?
 

GregTomkins

Active Member
1. We only generate it when we deploy new code. It's part of our deployment process, and it could take a while since it's parsing hundreds of JS files using code that isn't particularly efficient.

2. There isn't really any systematic control. We have to check for alignment/spacing issues anyway, which is a manual, visual task, and far from perfect. That said, probably 90% of the words we need are already translated, so missing words is a lesser concern than words that are translated, but look wrong (eg. because "US" translates to "NOSOTROS" and needs 4x more space).
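A basic automated check is possible here: since the deploy step already parses the JS for `getTranslation()` calls, the same scan can report any string literal that has no entry in the translation map. A minimal sketch (the function name and regex are illustrative, not the actual deploy code):

```javascript
// Hypothetical deploy-time check: extract every getTranslation('...')
// string literal from the JS source and report keys that are missing
// from the translation map.
function findMissingTranslations(jsSource, translations) {
  const callPattern = /getTranslation\(\s*['"]([^'"]+)['"]\s*\)/g;
  const missing = new Set();
  for (const match of jsSource.matchAll(callPattern)) {
    if (!(match[1] in translations)) missing.add(match[1]);
  }
  return [...missing];
}
```

A scan like this only catches missing entries, of course; it cannot catch the layout problems described above, which still need a visual check.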

3. We don't do any compression. The time factor to download the .json is trivial compared to the rest of the app, not to mention the effect of ongoing queries and updates.

Again, our major concern was leveraging pre-existing translations. If in your case those don't exist, I'd consider just writing the .json by hand and bypassing all the complexities of parsing and so forth. However, you'd still need to contend with wrapping everything in getTranslation() calls (or something similar; different frameworks do this in different ways, but it's probably safe to say all of them require some way to distinguish a translatable string from a URL or other non-translatable terms).

Cheers
 