Tag: XLIFF


XLIFF 2.1 open for second public review

Language Industry News and Events

XLIFF Version 2.1 has reached an important milestone in its development. On February 7, the OASIS XLIFF TC members approved the second Committee Specification Draft and sent it out for a second public review. The OASIS Administration made the call for public comments on February 9, 2017. The second public review period will end on February 24, 2017.

The public review draft was presented at XML Prague last week. You can read the details of the ITS implementation in the XML Prague proceedings.

The first “dot” release after XLIFF 2.0 delivers on the modularity promise of the XLIFF 2 architecture. XLIFF 2.1 defines three new namespaces and brings full native ITS 2.0 capability via its ITS Module without breaking backwards compatibility with XLIFF 2.0.

XLIFF 2 Core and 7 out of 8 XLIFF 2.0 Modules are unaffected by the 2.1 release. Apart from a major bugfix for the Change Tracking Module and the brand new ITS Module, XLIFF 2.1 brings an Advanced Validation capability. XLIFF 2.1 (and XLIFF 2.0 as well) can now be 100% validated with standardized validation artifacts, without resorting to custom validation code. The expressivity of the validation framework was greatly enhanced by using the Schematron and NVDL schema languages on top of the XML Schemas (XSD) that were already available in XLIFF 2.0.
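
The normative validation artifacts are published with the specification itself; purely as an illustration of how the pieces fit together, an NVDL dispatch file along these lines can route elements in the XLIFF core namespace to both an XML Schema and a Schematron schema (the schema file names below are placeholders, not the artifacts shipped by the TC):

<?xml version="1.0" encoding="UTF-8"?>
<rules xmlns="http://purl.oclc.org/dsdl/nvdl/ns/structure/1.0">
  <namespace ns="urn:oasis:names:tc:xliff:document:2.0">
    <!-- Check core sections against the XSD, then apply Schematron co-occurrence constraints -->
    <validate schema="xliff_core_2.0.xsd"/>
    <validate schema="xliff_core_constraints.sch"/>
  </namespace>
  <!-- Module and extension namespaces would get their own rules; anything else is let through -->
  <anyNamespace>
    <allow/>
  </anyNamespace>
</rules>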

This second public review draft incorporates all feedback received during the first public review, which we reported on in November last year. You can view the resolutions for each of the 20 issues opened during the first review here.

All comments from the wider community (those who are not members of the XLIFF TC) are collected through the XLIFF TC’s publicly archived comment list.

When posting a comment, please include the string “XLIFF 2.1 csprd02” in the subject line. You may want to number your comments if you’re sending a few of them, and the subject line should give an idea of what your comment is about.

Collected comments and the progress of their disposition are public and can be followed on the XLIFF TC JIRA project.

The XLIFF TC plans to have satisfactory dispositions for all comments by March 7, 2017, and approve the Committee Specification by March 21, 2017, if this round of the public review does not necessitate material changes to the specification.

For the progression of the standard from the Committee Specification stage to the Candidate OASIS Standard stage, early adopters within the TC and outside of the TC need to demonstrate implementability of the new standard by making public Statements of Use and posting those to the TC. Write to the TC comment list if you are interested in an early implementation and need advice. The XLIFF TC will launch the questionnaire to collect the public Statements of Use in late February/early March 2017. We expect the OASIS-wide standards approval ballot to take place during the summer of 2017.


David Filip is Chair (Convener) of the OASIS XLIFF OMOS TC; Secretary, Editor and Liaison Officer of the OASIS XLIFF TC; a former Co-Chair and Editor for the W3C ITS 2.0 Recommendation; and co-moderator of the Interoperability and Standards WG at JIAMCATT. He has also been appointed as an NSAI expert to ISO TC37 SC3 and SC5, ISO/IEC JTC1 WG9, WG10 and SC38. His specialties include open standards and process metadata, workflow and meta-workflow automation. David works as a Research Fellow at the ADAPT Research Centre, Trinity College Dublin, Ireland.


XLIFF 2.1 open for public review

Language in the News

XLIFF Version 2.1 has reached an important milestone in its development. On October 14, the OASIS XLIFF TC members approved the first Committee Specification Draft and immediately sent it out for the first public review. The OASIS Administration made the call for public comments on October 26, 2016. The first public review period will end on November 25, 2016.

The public review draft was extensively presented and discussed at the FEISGILTT workshop at LocWorld32 in Montreal last week.

The first “dot” release after XLIFF 2.0 delivers on the modularity promise of the XLIFF 2 architecture. XLIFF 2.1 defines two new namespaces and brings full native ITS 2.0 capability via its ITS Module without breaking backwards compatibility with XLIFF 2.0.

XLIFF 2 Core and 7 out of 8 XLIFF 2.0 Modules are unaffected by the 2.1 release. Apart from a major bugfix for the Change Tracking Module and the brand new ITS Module, XLIFF 2.1 brings an Advanced Validation capability. XLIFF 2.1 (and XLIFF 2.0 as well) can now be 100% validated with standardized validation artifacts, without resorting to custom validation code. The expressivity of the validation framework was greatly enhanced by using the Schematron and NVDL schema languages on top of the XML Schemas (XSD) that were already available in XLIFF 2.0.

All comments from the wider community (those who are not members of the XLIFF TC) are collected through the XLIFF TC’s publicly archived comment list.

When posting a comment, please include the string “XLIFF 2.1 csprd01” in the subject line. You may want to number your comments if you’re sending a few of them, and the subject line should give an idea of what your comment is about.

Collected comments and the progress of their disposition are public and can be followed on the XLIFF TC JIRA project.

The XLIFF TC plans to have satisfactory dispositions for all comments by the end of November 2016, and approve the second public review draft by December 6, 2016.

For the progression of the standard from the Committee Specification stage to the Candidate OASIS Standard stage, early adopters within the TC and outside of the TC need to demonstrate implementability of the new standard by making public Statements of Use and posting those to the TC. Write to the TC comment list if you are interested in an early implementation and need advice.




How to Use the HTML5 Translate Attribute: A Translatability Best Practice

Personalization and Design, Translation Technology

HTML5 introduces a translate attribute that allows fine-grained control over what content should be translated or not. Richard Ishida of the W3C has all the details of the attribute and its applicability, as well as some interesting insights into how Bing Translator and Google Translate handle the translatability of content.

Here’s an example of the translate attribute’s use, taken from Richard’s blog (the HTML5 spec’s global attributes section has another nice example; see the Bee Game):

<p>Click the Resume button on the Status Display or the
<span translate="no">CONTINUE</span> button
on the printer panel.</p>

See how the word CONTINUE is made non-translatable by setting the translate attribute to “no”? Blimey! However, there are times when CONTINUE might need to be translated. So, flip that puppy to “yes”.
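
Because the translate attribute is inherited by nested elements, an explicit translate="yes" can also switch translation back on inside a larger block that is otherwise protected. Here is a minimal sketch (the product code and wording are invented for illustration):

<div translate="no">
  <!-- The product code stays in English; the parenthetical note may be translated -->
  <p>ACME-PRINT-100 <span translate="yes">(limited edition)</span></p>
</div>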

This HTML5 attribute is a very welcome addition to the content creation and translation tools world, sure. But it is welcome for other reasons too.

This is a time of new interactions and emerging platforms that challenge the established desktop and website norms of what should be translated or not. Mobile, augmented reality, gamification, and other trends all challenge the established rules for content. It is also a time when companies redefine themselves, cross over, and promote their own design guidance as a differentiator in the market. Oracle, for example, likes to say “Software. Hardware. Complete.”, so content needs to cross-reference many deliverables. SAP, as another example, recently launched an app in the consumer space (available in German and English) that may require a different style of content and translation from the enterprise applications space. Android has released user experience (UX) guidance of its own, and so on.

I previously raised such translatability issues in my Don’t Translate, Won’t Translate blog post. I chipped in on the [Bug 12417] discussion about the attribute’s development, too.

Using content to convey a translation instruction, by making a piece of text all uppercase for example, is not a best practice. It is a UX failure, makes personalization and customization difficult, and assumes the consumer of the content is a second-class stakeholder. Frankly, it is also very dangerous. Can you imagine if software developers used text that way in their code, rather than relying on the program logic?

As for the time-honored method of writing a translation note, or description, telling a translator that some content should not be translated, or should be, well, such approaches just ain’t reliable or scalable, are they?

Now, there is a clear best practice to follow (and adapt for other formats). The HTML5 translate attribute teaches content developers that the best practice for indicating whether content should be translated or not is the use of markup (or metadata), not the way the content is written. Translation tools should be updated to the HTML5 spec requirements and process this attribute ASAP.
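
The same markup-over-convention principle can be adapted for other formats. For general XML content, for example, ITS 2.0 lets you declare translatability with a global rules file rather than through the wording itself; here is a minimal sketch (the selector is just an example, not a recommendation):

<its:rules xmlns:its="http://www.w3.org/2005/11/its" version="2.0">
  <!-- Example rule: treat the content of any <code> element as non-translatable -->
  <its:translateRule selector="//code" translate="no"/>
</its:rules>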


Ultan Ó Broin (@localization) is an independent UX consultant. With three decades of UX and L10n experience and outreach, he specializes in helping people ensure their global digital transformation makes sense culturally and also reflects how users behave locally.

Any views expressed are his own. Especially the ones you agree with.


Standards, Interoperability, Popcorn

Language Industry News and Events, Translation Technology

These standards initiative thingies are like buses. You wait for ages and then two of them come along together.

Following the, er, demise of LISA (the Localization Industry Standards Association), we have just seen an announcement by the Translation Automation User Society (TAUS) calling for community guidance on a proposal for that body to become an interoperability watchdog for the industry. This was followed shortly afterwards by an announcement from the Globalization and Localization Association (GALA) that it will fund a standards initiative for the entire industry. Of course, the TAUS and GALA positions are not mutually exclusive, and I think they complement each other. I’ll get that popcorn…

The interoperability issue, for example, costs the industry a fortune (to the tune of millions of dollars for some). The following presentation, XLIFF: Theory and Reality, by Micah Bly of Medtronic, delivered at last year’s XLIFF Symposium in Ireland, has some great examples of the issues involved (hat tip: @ctatwork).

Bottom Line: Interoperability Saves Consumers Money

And sure, who wants to admit to using their own, er, flavor of XLIFF, or to using it in some special way? (The next time you hear somebody talking about XLIFF, just throw in the phrase ‘inline markup’ and see the reaction.) It’s always somebody else breaking the standard or not meeting yours, isn’t it?

Standards in file formats and tool ‘neutrality’ are notoriously difficult areas to negotiate, and the L10n industry isn’t unique in facing the challenge. The debate generates a lot of thought for sure. Personally, I think that, given the costs involved, it is localization service buyers who will call the shots in driving the standards debate. On the other hand, maybe an organization outside the industry might be a better place to look for compliance.

One thing that I (given my role) am interested in understanding is why so many people feel the need to write proprietary extensions to seemingly open standards, or to go about implementations in a quirky way. I think there is a link between interoperability issues and some pretty dismal information quality processes, an obsession with formatting over structure, and a failure to automate at the source level too (if I see one more workaround to manually create context for translators, instead of deriving it automatically, I will go nuts). We need to be able to tackle this across the entire information lifecycle. For example, in the ERP space, only 23% of companies stick with the vanilla flavor of the application (i.e., what they get out of the box). The rest go off and customize (and that means translating it).

It’s a very interesting debate to watch on Twitter (try the #galalisb hashtag while it lasts).

Your views? Find the comments…




Don't Translate, Won't Translate

Translation Technology

How do you indicate non-translatable parts of a translatable string? I’m interested in your best practices and advice.

Let’s assume the file format concerned is XML, and XLIFF at that. And let’s also assume these source strings are in English and must be optimized for as much translation automation as possible (MT, TM, other string leveraging techniques, and so on).

I would have thought the obvious solution was something like using the mrk element in the source segment. For example:

<source>Please translator, do not translate the word <mrk mtype="protected">Groovy</mrk>. The word <mrk mtype="protected">Groovy</mrk> refers to an agile dynamic language for the <mrk mtype="protected">Java</mrk> platform and should remain in English.</source>
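
(For comparison, XLIFF 2.0 Core’s mrk inline element carries a translate attribute directly, so the same intent could be expressed roughly as follows, with the enclosing xliff and file elements omitted for brevity:)

<unit id="u1">
  <segment>
    <!-- translate="no" on the mrk protects the marked span from translation -->
    <source>Please translator, do not translate the word <mrk id="m1" translate="no">Groovy</mrk>. It should remain in English.</source>
  </segment>
</unit>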

How would you do it given the assumptions above?


