Well, a panic arose on Twitter when conservatives discovered their crashed smart TVs started demanding the sacrifice of children... must have been all the daemons possessing our computers:
maybe we could have some keyboard plugin/"mode" that merges letters to form symbols in a special way?
Korean Windows has a "special-emoji" keyboard that can be invoked by: 1. type a character 2. press the 'hanja' key 3. (a wild special-character selection menu appears)
Most Linux environments already have this, it's called "compose key". You may need to enable it in your desktop environment's settings, but then after pressing the key you can press !? to type ‽, 12 to type ½, <= to type ≤, n~ to type ñ, <3 to type ♥, etc. Here's GTK's list: https://help.ubuntu.com/community/GtkComposeTable
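You can also extend the compose table with your own sequences via a `~/.XCompose` file. A minimal sketch (the two sequences shown are examples; the syntax is the standard XCompose one):

```
# ~/.XCompose -- picked up by X11/GTK apps, usually after re-login
include "%L"                      # keep the system-wide defaults

<Multi_key> <exclam> <question> : "‽"   # compose ! ? -> interrobang
<Multi_key> <less> <3>          : "♥"   # compose < 3 -> heart
```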
It would be nice if this were built into Windows, but in lieu of that, I have been using WinCompose[0] for several years for typing diacritics (as part of my learning French). For example:
* ALT e ' = é
* ALT u " = ü
* ALT n ~ = ñ
It's not limited to diacritics; you can type ligatures (ALT ae = æ), extended characters (ALT [/] = ), most of Unicode, I assume (ALT #G = 𝄞), and so on. And yes, even emoji (ALT ALT alembic = )
edit: apparently, HN won't show the checkbox (U+2611) or alembic[1].
> It would be nice if this were built into Windows, but in lieu of that
Something that's basically equivalent is built in. Just switch your keyboard from EN-US to EN-INTL, and several accent characters become dead keys, so ' e = é, etc.
For Windows, the ENG-INTL keyboard layout is pretty good for simple accented characters, covering Latin-based European languages [1].
Windows+. also allows you to just type to search for emojis, so e.g. typing WINDOWS+. sad ENTER gives you a sad emoji. Sadly this doesn't work for symbols, even though the Windows+. menu has a good coverage of symbols.
I guess you can always switch your keyboard to Korean input.
Julia does this in a pretty simple way: you type \lambda then press tab and it becomes λ. Emoji and many other things work like this too, so typing them is very easy.
Here's a quote from Reynolds, one of the creators of Spark:
> The primary issue I can think of comes from the lack of pattern matching. Kotlin’s language designer left out pattern matching intentionally because it is a complex feature whose use case is primarily for building compilers. However, modern Spark (post Catalyst / Tungsten) looks a lot like a compiler, and as a result the internals would become more verbose if built using a language that doesn't support pattern matching.
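To make the quote concrete, here's a toy expression tree in the spirit of Catalyst's, showing why compiler-style code leans on pattern matching: each rewrite rule is one self-documenting case. (The classes here are illustrative, not Spark's actual ones.)

```scala
sealed trait Expr
case class Lit(v: Int) extends Expr
case class Add(l: Expr, r: Expr) extends Expr
case class Mul(l: Expr, r: Expr) extends Expr

// Each case is an algebraic rewrite rule; without pattern matching this
// becomes a thicket of instanceof checks and casts.
def simplify(e: Expr): Expr = e match {
  case Add(Lit(0), x)      => simplify(x)        // 0 + x  ->  x
  case Add(x, Lit(0))      => simplify(x)        // x + 0  ->  x
  case Mul(Lit(1), x)      => simplify(x)        // 1 * x  ->  x
  case Mul(Lit(0), _)      => Lit(0)             // 0 * x  ->  0
  case Add(Lit(a), Lit(b)) => Lit(a + b)         // constant folding
  case Add(l, r)           => Add(simplify(l), simplify(r))
  case Mul(l, r)           => Mul(simplify(l), simplify(r))
  case other               => other
}

println(simplify(Mul(Lit(1), Add(Lit(2), Lit(3)))))  // prints Lit(5)
```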
but scala's cousin kotlin doesn't seem to suffer from this much... kotlin library packages don't have the kotlin version in their download link... maybe I'm missing something?
for most simple HTTP webservers, python maintains somewhat OK-ish source/binary compatibility between python versions. But my experience with scala was things breaking between scala versions, or waiting for some library to be compiled against the next scala version...
I'm surprised that scala couldn't solve this better than python -- I mean, scala's a compiled language, so it should have more wiggle room...
Scala’s type system doesn’t map perfectly to the JVM's, so they had to bend the latter to implement Scala. That’s similar to how C++ compilers bend the rules to conform to C linkers (https://en.wikipedia.org/wiki/Name_mangling#C++), but much more complex because Scala's type system is so much richer than the JVM's.
Scala's developers aren’t willing to freeze that mapping, partly because they keep finding better ways to do such mappings, and partly because they keep changing the language, which changes what the best mapping is.
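That unstable mapping is also why Scala artifacts carry the Scala binary version in their names, which answers the earlier question about download links. A sketch of what a typical build.sbt looks like (library names are just examples):

```scala
// build.sbt (sketch)
scalaVersion := "2.13.12"

libraryDependencies ++= Seq(
  // %% appends the Scala binary version: this resolves "cats-core_2.13",
  // because the bytecode mapping differs per Scala version
  "org.typelevel" %% "cats-core" % "2.9.0",
  // single % is a plain Java artifact with no suffix -- this is why
  // Kotlin/Java library download links don't carry a language version
  "com.google.guava" % "guava" % "32.1.2-jre"
)

// libraries that support several Scala versions must cross-build:
crossScalaVersions := Seq("2.12.18", "2.13.12", "3.3.1")
```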
I think it’s easier for interpreted languages to keep their internals compatible. They are willing to give up some speed for convenience, so even if they think “I wish we had done that differently”, the pressure to change it isn’t that high.
They also keep more metadata around. In some cases, that enables them to discover “this is using the old way to do Foo”, and fix that up to use the new way.
it is intentional. Scala's maintainers want to keep the language evolving faster than python/java. If you look at scala3/dotty, it actually is a very different language from scala2
It's more like, Rust wants to make guarantees that just aren't possible for a block of memory that represents a world-writable file that any part of your process, or any other process in the OS, might decide to change on a whim.
In other words, mmaped files are hard, and Rust points this out. C just provides you with the footgun.
The problem is that compilers are allowed to make certain general assumptions about how they may reorder code, all based on the premise that no other process is modifying the memory. For example, the optimizer may remove redundant reads. That's a problem if the read isn't really redundant -- if the pointer isn't targeting process-owned memory but a memory-mapped file that's modified by someone else. Programs might crash in very "interesting" ways depending on optimization flags.
C has this issue as well, but Rust's compiler leans especially hard on the aliasing guarantees the borrow checker provides, so Rust code can potentially be bitten even harder.
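A minimal sketch of the escape hatch Rust provides for exactly this: `std::ptr::read_volatile` forces an actual load on every read, which is what you'd reach for when the bytes behind a pointer can change outside the compiler's view (an mmapped file, device memory). A plain local variable stands in for the mapped region here, just to keep the example self-contained:

```rust
use std::ptr;

fn main() {
    // In the real scenario this would point into an mmapped file that
    // another process can rewrite behind the compiler's back.
    let mut cell: u32 = 1;
    let p: *mut u32 = &mut cell;

    unsafe {
        // A plain read: the optimizer may assume the value is stable
        // between reads and fold repeated loads into one.
        let a = *p;

        // A volatile read: the compiler must emit a fresh load and may
        // not elide or reorder it away, even if it looks redundant.
        let b = ptr::read_volatile(p);

        ptr::write_volatile(p, 2);
        let c = ptr::read_volatile(p);

        println!("{} {} {}", a, b, c); // prints: 1 1 2
    }
}
```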