Mars InSight Landing Live!

Today I am working my way through an English-to-Dutch translation of the manual for a paper cutting machine, but I hope to find time tonight to follow the landing.

The landing was scheduled for around 3 p.m. EST (Eastern Standard Time), which is UTC minus 5 hours. 3 p.m. is 15:00. Belgium and the Netherlands are on Central European Time (UTC+1 in winter), so there it would be 21:00. (Such calculations will become easier once the winter/summer time change is abolished, because not every country introduced it at the same moment.)
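The conversion is easy to check in code. This Python sketch uses the standard-library zoneinfo module (Python 3.9+); the zone names are IANA identifiers:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# 3 p.m. EST on landing day, 26 November 2018
landing_est = datetime(2018, 11, 26, 15, 0, tzinfo=ZoneInfo("America/New_York"))

# Convert to Belgian/Dutch local time (CET, UTC+1 in winter)
landing_local = landing_est.astimezone(ZoneInfo("Europe/Brussels"))
print(landing_local.strftime("%H:%M"))  # → 21:00
```

Letting the time-zone database do the work also sidesteps exactly the summer/winter-time pitfalls mentioned above.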

Click the text "LIVE Nasa InSight Mars Landing" at the top left of the video to play it.

InSight on Mars


Back in August 2016 we could already report that we had "boarded" for Mars.

As part of its educational mission, NASA came up with a variant of the airlines' Frequent Flyer Miles: you can become a Frequent Flyer on NASA flights.

Anyone who signs up for a flight receives a handsome boarding pass, collects Frequent Flyer Miles, and has their name written to a chip that travels with the spacecraft. In the case of InSight, that chip ends up on Mars.


A nice little idea, and we took part in it.

The launch was postponed, but on 5 May 2018 the moment finally came: InSight departed from Vandenberg Air Force Base (US) on an Atlas V 401 rocket.

On Monday 26 November 2018 the landing attempt on Mars begins.

Quite a few Mars landings have failed in the past. It is not easy to set something down on a planet millions of kilometers away, which means everything has to happen automatically.

The absence of a real atmosphere also means the landing proceeds completely differently than on Earth, so it cannot be tested here. Moreover, there is rarely a chance to "just try it": there is no second chance. As recently as 2004 a mission was still lost for that reason.

InSight is meant to deliver more data about the planet's interior. Mercury, Venus, Earth and Mars are the only terrestrial planets in our solar system, and if we want to learn anything about the evolution of our own planet, they are the best objects to study. Terrestrial planets have been found around other stars, but those are far too distant to learn much about.

Mars fascinates because it is not impossible that it once harbored life. Present-day Mars is bone-dry, has hardly any atmosphere, and its magnetic field looks like the balding head of a chemotherapy patient. Nor is there a moon worth mentioning: Phobos and Deimos look more like captured asteroids.

The absence of a proper moon may play a bigger role in the development of Mars than we think. It has been suggested that the tidal action of the Moon on Earth's seas washed the coasts with the germs of life, making it easier for life to move onto land. The most important possible effect, however, is that the Earth-Moon system stabilizes itself, so that the Earth's rotation is preserved.

Within four days of perihelion, Mercury's orbital velocity around the Sun is higher than its rotational velocity around its own axis.

Venus completes an orbit around the Sun every 224.65 days and rotates on its axis once every 243 Earth days, the slowest rotation of any planet in the solar system. A sidereal day on Venus is even longer than a Venusian year, but because of the planet's motion around the Sun, a day at the surface (a synodic day, the period between two sunrises) is considerably shorter: 116.75 Earth days.[3] That makes the synodic day on Venus shorter than the one on Mercury.

Neither Venus nor Mercury has moons, and the mass of the Martian moons is negligible compared with that of Mars, so their influence on the planet is insignificant.

The Moon's diameter is about a quarter of Earth's. No other planet in the solar system has a proportionally large satellite. The Moon's tidal action stabilizes the tilt of the Earth's axis.[7] Some scientists think that without this stabilizing effect the axis would be subject to chaotic changes, making the Earth's climate far more variable and extreme. If the Earth's axis lay in the plane of its orbit, as is currently the case with Uranus, complex life would probably be impossible because of the extreme differences between the seasons.[8]

Earth, then, remains something special. It also cannot be ruled out that tidal action by the Moon keeps the Earth's interior warm, which could explain the strong magnetic field: that field arises from the flowing, iron-rich and therefore electrically conductive interior of the Earth, and an electric current generates a magnetic field.

To know the precise influence of the Moon on the Earth's interior, we need a comparison with a similar planet without a moon, and one of the few candidates is Mars.

InSight will probe and sniff out Mars in all sorts of ways to discover more about its characteristics. It does so with a lander, accompanied by two small satellites that serve mainly as a communications relay.

For those hoping one day to travel with Elon Musk to Mars to colonize the Red Planet, we have bad news: as Neil deGrasse Tyson remarked, terraforming Mars is much harder than keeping the Earth healthy and protected, which is why he also dismisses his colleague Stephen Hawking's ideas about a multi-planetary human race. And Tesla shareholders are not the only ones who have begun to wonder whether Musk gets his ideas when he is sober, or when he is smoking marijuana.

 

Today on IATE

Today the site IATE (InterActive Terminology for Europe) shows a banner at the bottom which reads:

The IATE partners are glad to announce the opening of the new version of IATE. The current version will be replaced with a fully revamped version the week of 12 November. We take the opportunity to thank you for your continuous use of IATE and look forward to better serving your needs with the new version.

IATE is a renowned European website for the translation of terminology in various fields into all official European languages.
Although its database is not complete, it is often used to provide official translations that are valid for the whole of the European Union.

 

Go to https://iate.europa.eu/home for more information.

Will AI make translation an obsolete craft?


Sometimes people use Google Translate to understand a website. And some people think there exist computer programs that spew out translations without any hassle. That makes translators look like old-fashioned craftsmen who at best run a workshop in a tourist center or show off their trade at an arts & crafts exhibition.

 

Does that image suit reality?

 

As Artificial Intelligence is on the rise, some people proclaim the death of the translator within ten or maybe even five years.

 

But as a matter of fact, Artificial Intelligence is not something that will pop up all of a sudden. It has been influencing daily practice since the nineties, maybe even earlier.

Research into artificial or machine translation started as early as 1949, but as is often the case in IT, the name promised more than it delivered. Early applications did nothing more than automatically look up words in an automated dictionary.

 

Some historians claim that the idea of machine translation can be traced back to the 17th century, when in 1629 René Descartes proposed a universal language in which equivalent ideas in different tongues would share one symbol. But the field of machine translation proper first appeared with Warren Weaver's 'Memorandum on Translation' of 1949. Research started in 1951 at MIT under Yehoshua Bar-Hillel, and in 1954 there was a surprising demonstration at Georgetown University, where the machine translation research team showed off its Georgetown-IBM experiment system. As computing power increased, so did the results of artificial translation. Yet real progress was slow, and after the ALPAC report of 1966 found that ten years of research had failed to fulfill expectations, funding was greatly reduced. In 1972, however, a report by the Director of Defense Research and Engineering (DDR&E) re-established the feasibility of large-scale MT, thanks to the success of the Logos MT system in translating military manuals into Vietnamese during that conflict. So once again, ironically, war drove progress.

 

So, considering that research started around 1949, probably prompted by the advent of computers during the Second World War, progress was actually very slow. The problem is whether a computer program can actually understand human language, and whether that understanding is necessary in order to translate.

 

Some would argue "yes", and they try to find the rules that govern human language. Interesting in that respect was transformational-generative grammar, or TGG. Its philosophy is that human beings carry a set of rules in their heads which turns meaning into meaningful sentences. An English speaker, for instance, would have a rule placing the verb immediately after the subject, whereas a Japanese speaker would have a rule placing the verb at the end of the sentence.
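As a toy illustration of that word-order idea (not a real TGG implementation), assume a "deep structure" of labeled subject, verb and object parts; an English-style rule orders them S-V-O, a Japanese-style rule S-O-V:

```python
# Hypothetical toy grammar: a "deep structure" is a dict of labeled parts,
# and each language contributes an ordering rule for the surface structure.
deep = {"S": "the cat", "V": "eats", "O": "the fish"}

RULES = {
    "english":  ["S", "V", "O"],   # verb immediately after the subject
    "japanese": ["S", "O", "V"],   # verb at the end of the sentence
}

def surface(deep, language):
    """Generate a surface structure by applying the language's ordering rule."""
    return " ".join(deep[slot] for slot in RULES[language])

print(surface(deep, "english"))   # → the cat eats the fish
print(surface(deep, "japanese"))  # → the cat the fish eats
```

Real grammars are of course vastly richer, but the sketch shows the appeal: one deep structure, many surface forms.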

 

The fact is, however, that you still have to enable the computer program to grasp the meaning of what it has to say. But it is not the translation program that builds up the message to be translated: the message is already given in the source text.

To a certain degree, that simplifies matters: the program only has to transform a message from a source text into a target text, where source and target contain the same content, encoded in different ways.

 

That is, of course, an idea that appeals to programmers. You take a source sentence, use TGG to derive its inner structure or deep structure, and use the TGG of another language to build up a new surface structure. As simple as that.

 

It seems the most intelligent way to deal with artificial translation, but linguists themselves are not always sure which rules one should put into a TGG. And in any case, TGG is meant to go from deep structure to surface structure, not the other way around. That leaves us with the problem of analyzing the source text: all TGG rules have to be "reversed".

 

Although there are many other ways to deal with automatic translation, not all of them could be applied from the very beginning. The advantage of a TGG-based translation system was the promise of using rules the way a human being processes language – or is thought to process language – thereby limiting the amount of memory needed. Rules, as in maths, provide a way to apply knowledge without a big knowledge base. Compare having to learn all the multiplication tables from 1 through 10 with only having to know the rule that you add a number to itself as many times as the multiplier says.
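The multiplication analogy can be made concrete: a lookup table stores a hundred facts, while the rule stores none (a sketch of the trade-off, obviously not how computers actually multiply):

```python
# Rule-based knowledge: multiplication as repeated addition - nothing stored.
def multiply(a, b):
    total = 0
    for _ in range(b):
        total += a
    return total

# Table-based knowledge: every fact from 1x1 to 10x10 stored explicitly.
table = {(a, b): a * b for a in range(1, 11) for b in range(1, 11)}

print(multiply(7, 8), table[(7, 8)])  # → 56 56
print(len(table))                     # → 100 stored facts vs. one rule
```

The rule trades memory for computation; the table trades computation for memory, and it breaks down as soon as you step outside the stored range.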

 

Most machine translation systems try to apply rules, but not all to the same degree. As a matter of fact, the terms 'machine translation', 'automatic translation', 'artificial translation' and so on are not interchangeable.

 

The main rule-based machine translation (RBMT) paradigms fall into three types: transfer-based machine translation, interlingual machine translation and dictionary-based machine translation.

 

RBMT involves more information about the linguistics of the source and target languages. The basic approach uses a parser for the structure of the source sentence and an analyzer for the source language, then applies a generator to that information to produce the target sentence, with a transfer lexicon for the translation of the words.
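A minimal sketch of that analyze → transfer → generate pipeline; the two-entry lexicon and the single reordering rule are invented for the example (real RBMT systems use full morphological analyzers and thousands of rules):

```python
# Toy RBMT pipeline: analyze the source, transfer via a bilingual lexicon,
# generate the target. Lexicon and rules are invented illustrations.
LEXICON = {"the": "la", "white": "blanche", "house": "maison"}

def analyze(sentence):
    # "Parser": tag a trivial determiner-adjective-noun structure.
    det, adj, noun = sentence.lower().split()
    return {"det": det, "adj": adj, "noun": noun}

def transfer(parts):
    # Word-level transfer through the bilingual lexicon.
    return {role: LEXICON[word] for role, word in parts.items()}

def generate(parts):
    # French-style generation rule: the adjective follows the noun.
    return f"{parts['det']} {parts['noun']} {parts['adj']}"

print(generate(transfer(analyze("the white house"))))  # → la maison blanche
```

Even here the explicitness problem shows: gender agreement ("blanche", "la") is simply hard-coded into the lexicon, exactly the kind of detail a real analyzer must handle by rule.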

 

However, RBMT demands that everything be made explicit: orthographical variation and erroneous input must be built into the source-language analyzer, and lexical selection rules must be written for all instances of ambiguity. Adapting to new domains is in itself not that hard, as the core grammar is the same across domains and the domain-specific work is limited to adjusting lexical selection. But, of course, that is all from a theoretical point of view.

 

Another way is transfer-based machine translation. It creates a translation from an intermediate representation that simulates the meaning of the original sentence. Unlike interlingual MT, it depends partially on the language pair involved in the translation.

The third method, interlingual machine translation, is also a kind of rule-based machine translation. The source language is transformed into an interlingua, a 'language-neutral' representation independent of any particular language, from which the target language is then generated. One of the major advantages of this approach is that the interlingua becomes more valuable as the number of target languages it can be turned into increases. However, the only interlingual machine translation system that has been made operational at the commercial level is the KANT system (Nyberg and Mitamura, 1992), designed to translate Caterpillar Technical English (CTE) into other languages.

Using Caterpillar texts had the advantage of an enormous stock of already translated material, and CTE is rather limited in scope: it only has to deal with technical language for heavy mobile equipment. Using it to translate other subject matter would be disastrous.

 

The dictionary-based system uses a method based on dictionary entries, which means the words are translated as a dictionary would translate them. Clearly, a pure dictionary-based system can only produce word-for-word translations, and therefore rather mediocre results – to put it mildly.
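A word-for-word lookup makes the mediocrity tangible. The mini-dictionary below is invented for the example; note how the Dutch verb-final word order survives the "translation" untouched:

```python
# Pure dictionary-based "translation": look up each word, keep the order.
DICTIONARY = {"ik": "I", "heb": "have", "het": "the",
              "boek": "book", "gelezen": "read"}

def translate_word_for_word(sentence):
    # Unknown words are simply passed through unchanged.
    return " ".join(DICTIONARY.get(word, word) for word in sentence.split())

# Dutch puts the participle at the end - and so does the "translation":
print(translate_word_for_word("ik heb het boek gelezen"))
# → I have the book read
```

Every individual word is correct, yet the sentence is not English; that gap between word and sentence is exactly what the rule-based approaches above try to close.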

 

Statistical machine translation (SMT) uses bilingual text corpora. Where such corpora are available, good results can be achieved translating similar texts, but corpora are still rare for many language pairs. Google switched to a statistical translation method in October 2007. In 2005 it had already improved its internal translation capabilities by training the system on approximately 200 billion words from United Nations materials, and translation accuracy improved. Google Translate and similar statistical translation programs work by detecting patterns in hundreds of millions of documents previously translated by humans and making intelligent guesses based on those findings. Generally, the more human-translated documents are available in a given language, the more likely it is that the translation will be of good quality. However, this turned out not always to be the case, rather to Google's surprise. Newer approaches to statistical machine translation use a minimal corpus size and instead focus on deriving syntactic structure through pattern recognition, which puts more stress on artificial intelligence. SMT's biggest weaknesses are its dependence on huge amounts of parallel text, its problems with morphology-rich languages (especially when translating into such languages), and its inability to correct singleton errors. That explains why Google was disappointed – not to mention that a typical United Nations document deals with a limited set of subjects.
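The core idea of SMT can be caricatured in a few lines: count how often each candidate translation appears for a source word in an aligned corpus (here an invented, absurdly small one) and pick the most frequent candidate:

```python
from collections import Counter

# Invented word-aligned corpus: (source, target) pairs, as they might be
# extracted from human-translated parallel documents.
aligned_pairs = [
    ("bank", "bank"), ("bank", "riverbank"), ("bank", "bank"),
    ("huis", "house"), ("huis", "house"), ("huis", "home"),
]

# Count the target candidates seen for each source word...
counts = {}
for src, tgt in aligned_pairs:
    counts.setdefault(src, Counter())[tgt] += 1

# ...and translate by picking the most frequent candidate.
def translate(word):
    return counts[word].most_common(1)[0][0]

print(translate("huis"))  # → house
```

Real SMT models phrases and word order with probability models rather than raw counts, but the dependence on large amounts of parallel text – the weakness noted above – is already visible: with three examples per word, one unusual alignment can tip the statistics.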

 

Example-based machine translation is based on the idea of analogy. Here too the corpus contains texts that have already been translated. Given a sentence to be translated, sentences are selected from this corpus that contain similar sub-sentential components. Those similar sentences are then used to translate the sub-sentential components of the original sentence into the target language, and the resulting phrases are put together to form a complete translation.
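Translation-memory tools implement this kind of matching with fuzzy string similarity. This sketch uses Python's standard difflib to find the closest previously translated sentence; the memory entries are invented:

```python
from difflib import SequenceMatcher

# Invented translation memory: previously translated sentence pairs.
memory = {
    "Press the red button to stop the machine.":
        "Druk op de rode knop om de machine te stoppen.",
    "Replace the blade after 500 cuts.":
        "Vervang het mes na 500 sneden.",
}

def best_match(sentence):
    # Return the stored (source, target) pair most similar to the new sentence,
    # together with its similarity ratio (0.0 to 1.0).
    score, pair = max(
        (SequenceMatcher(None, sentence, src).ratio(), (src, tgt))
        for src, tgt in memory.items()
    )
    return pair, round(score, 2)

pair, score = best_match("Press the green button to start the machine.")
print(pair[1], score)  # suggests the "rode knop" sentence as a fuzzy match
```

The suggestion is not a translation of the new sentence; it is raw material a human translator adapts – which is exactly how translation memories are used in practice.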

Hybrid machine translation (HMT) leverages the strengths of statistical and rule-based translation methodologies. Several MT organizations claim a hybrid approach that uses both rules and statistics.

 

And finally there is neural machine translation, a deep-learning-based approach.

But all these methods are in one way or another hampered by several problems: ambiguity in texts, non-standard speech, names of people, places, organizations and so on, and the continuous change of language: what is standard today may be substandard tomorrow, and vice versa.

 

In reality all systems are in some way hybrid systems, because the output of the computer program always has to be checked by a human translator. Example-based machine translation is actually the most successful form of machine translation, because the program uses a large memory of previous translations to come up with suggestions, which the translator has to judge, change if necessary, and validate.

 

As mentioned above, forms of machine translation have a long history, and development was slow, hampered both by characteristics of human language (e.g. its well-known lack of sustained logic) and by technological problems such as processing speed and memory size.

The main reason computer translation seems to be on the rise is that processing speed and memory size are gradually becoming less of a problem. It also means that the influx of all these forms of automation never produced one sudden boom in artificial translation.

 

It did, however, change the nature of the translator's work. Translation turned more and more into proofreading and editing, away from pure translation. That was a rather slow evolution, and in all likelihood it will remain one for a very long time.

 
