I know this is just a troll. Nevertheless: Iran was always known to enforce some level of covering a woman's hair. There are other Islamic countries that do so, some more strictly than has been the case in Iran. And there are plenty which don't, like Egypt, Turkey or Morocco.
I have a hard time imagining what that means, but it sounds like it could be helpful for a project at work. Can you point to a source that elaborates on error propagation over RPCs?
It means that if thread A does an RPC to a remote service, and thread B handling that RPC in the service takes an exception, the details are serialized and returned to thread A, where the exception is rebuilt and rethrown. On a platform that supports chained exceptions and blocking threads efficiently (e.g. the JVM), you get exceptions and stack traces that cross services.
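A minimal sketch of that round trip, in Python for brevity (all names here are made up for illustration; real frameworks do this inside generated stubs):

```python
import traceback

def handle_rpc(handler, *args):
    """Server side: run the handler; on failure, serialize the exception
    details instead of letting them die on the server."""
    try:
        return {"ok": True, "result": handler(*args)}
    except Exception as exc:
        return {"ok": False, "error": {
            "type": type(exc).__name__,
            "message": str(exc),
            "stack": traceback.format_exc(),
        }}

class RemoteError(Exception):
    """Client-side stand-in for the remote exception, carrying the
    remote stack trace so it shows up in the local one."""
    def __init__(self, info):
        super().__init__(f"{info['type']}: {info['message']}\n"
                         f"--- remote stack ---\n{info['stack']}")

def call_rpc(payload):
    """Client side: unwrap the result, or rebuild and rethrow."""
    if payload["ok"]:
        return payload["result"]
    raise RemoteError(payload["error"])
```

Here the "wire" is just the returned dict; a real stack would serialize it into the response and, on the JVM, attach the remote trace as a chained cause rather than flattening it into the message.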
It's not hard to implement, it's just that the industry kinda gave up on using powerful RPC frameworks with language integration. gRPC is about the fanciest you'll get and that is a clone of Stubby, which was originally written for C++ in a codebase that banned the use of C++ exceptions entirely. So it doesn't try to solve that problem.
> the industry kinda gave up on using powerful RPC frameworks with language integration
The trouble is that ties both ends to the same implementation language. Web-style RPC became popular because its simplicity meant it wasn't tied to any particular features of any particular language.
Yes, and simple textual protocols that the browser's limitations made ubiquitous.
I hope that at some point someone standardizes a way to pass virtual stack IDs, exception data and other RPC metadata via HTTP headers so we can get back the sort of features you find in high end RPC stacks.
Do you know of any methods not tied to a particular RPC framework that attempts to solve this problem? I imagine that you could try to implement something of the sort using OpenTelemetry Tracing at least, although the need to (presumably) reconstruct the distributed stack trace from a separate store is a deficiency of this approach for sure.
Not offhand; there's some JSON problems spec I found once, but I don't recall if it had anything for stack traces specifically.
It's not hard anyway. You can upgrade server frameworks to render the stack trace to either a string or a JSON data structure (or protobuf): when an exception bubbles up, you capture it, convert it to that structure, compress it, possibly encrypt it with a shared key (just to avoid leaks in case there isn't a proxy stripping the special header), base64-encode it, and stick it into a special header. Then your HTTP client can be taught to look for that header in case of a 500, decode it, turn it back into an exception, and rethrow it.
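As a sketch of that pipeline (the header name and JSON shape are invented here, and the encryption step is omitted; only compression and base64 are shown):

```python
import base64
import json
import traceback
import zlib

HEADER = "X-Remote-Exception"  # hypothetical header name

def encode_exception(exc: BaseException) -> str:
    """Server: exception -> JSON -> zlib -> base64 header value."""
    doc = {
        "type": type(exc).__name__,
        "message": str(exc),
        "stack": traceback.format_exception(type(exc), exc, exc.__traceback__),
    }
    raw = json.dumps(doc).encode("utf-8")
    return base64.b64encode(zlib.compress(raw)).decode("ascii")

class RemoteException(Exception):
    """Client-side exception rebuilt from the header."""

def maybe_rethrow(status: int, headers: dict) -> None:
    """Client: on a 500 carrying the special header, decode it and
    rethrow the remote exception with its stack trace attached."""
    if status != 500 or HEADER not in headers:
        return
    doc = json.loads(zlib.decompress(base64.b64decode(headers[HEADER])))
    raise RemoteException(
        f"{doc['type']}: {doc['message']}\n" + "".join(doc["stack"]))
```

A real deployment would hang `maybe_rethrow` off the HTTP client's response interceptor, and as the parent comment notes, the interesting policy questions are around the (omitted) encryption, not the encoding.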
If you have a unified framework like Micronaut it's probably the work of an afternoon to throw together. The hard part is the crypto. The moment you introduce keys, corporate security teams will insist on things like frequent rotation even if it's just a backstop against badly configured frontend servers and not mission critical (if you leak a stack trace, ok, not great but not the end of the world). So it might be better to obfuscate in a different way that isn't meant to be fully secure, or detect if a request comes from an internal IP in a secure way, etc.
I got a paper rejected by Phys. Rev. E :/ a paper with which I thought I might contribute to advancing the available methods for analysing synchronisation. And while I don't doubt that they also put a lot of effort into their paper, and I love that stuff like this exists and that they followed through with their experiments, it seems like they have done it for the lolz. The rejection hurts a bit more now.
I second that. I see it as something to build intuition for "complex" problems. To make this more concrete: if you want to study the brain, you can take the "biology" approach and describe neurons really well, building mathematical models for all the neuron types. Or you can do it the other way around with the "psychology" approach and put people/monkeys/rats in an MRI scanner. Both ways you learn important stuff, but it will be hard to connect the two worlds, because simulating enough neurons to gain predictive power over the outcome of an MRI scan is probably far-fetched (although there are the Human Brain Project and its US counterpart, the Brain Activity Map Project, which attempt something like this). Complexity theory might help to learn how to close this gap. Things like synchronization (http://www.scholarpedia.org/article/Synchronization#Chaotic_...) or self-organized criticality (from the critical brain theory) could help distinguish which parts of neuronal dynamics are due to biological restrictions and which form the function of the brain. With this knowledge one might be able to "dumb down" neuronal models enough to make large-scale simulations without losing too much of the processing dynamics.
You might still not have predictive power then, but then again, complexity theory might help you to understand what the limitations of your approach are.
The same intuitions could be applied to other things. For large-scale power grids, it is also often hard to predict when they will move into a sure-fail state. Being able to analyze how to stabilize these systems without basically dumping a lot of money on them is the way to go (looking at the past, the money will probably not be spent).
You could study the behavior of crowds and maybe estimate the safety of large conventions, building a "panic index" that calculates the risk of something like the Loveparade disaster https://en.wikipedia.org/wiki/Love_Parade_disaster . Again, you would not be able to make a precise prediction of what's going to happen, but I'd say even knowing that you have something like a 0.5% chance of a disaster would be worth knowing. (Of course, there are effective methods taught to prevent these disasters anyway. But sometimes you have new configurations that didn't occur in the past, and you might catch those with a simulation. It could be an additional approach.)
Could you elaborate on the power grid idea? It seems like a good example of how to apply complexity theory to make hard decisions, in this case optimal investments in grid stabilization.
For example, they report on a project where they used photovoltaic panels to stabilize changing power consumption in a power grid with minimal changes to the existing structure - something that is hard with traditional power plants, since those basically have too much momentum for quick switching action. https://ieeexplore.ieee.org/document/7007647
Also, agent-based simulations (building on the idea of self-organized criticality) are a thing. With these setups you can test how power grids react to certain failure types - something you don't want to test in real life. I assume these simulations would also be quite accurate, since consumption and production should be well known, as well as the physical properties of transmission.
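A toy illustration of the cascading-failure idea (this is a made-up sandpile-style model on a line of transmission links, not a physical grid model; the load-redistribution rule and parameters are invented):

```python
def cascade_size(loads, capacity, fail_index, spread=0.5):
    """Toy cascading-failure model: when a link fails, a fraction
    (spread) of its current load shifts onto each neighbor on a line
    topology; any neighbor pushed over capacity fails in turn.
    Returns the number of failed links."""
    loads = list(loads)
    failed = set()
    queue = [fail_index]
    while queue:
        i = queue.pop()
        if i in failed:
            continue
        failed.add(i)
        shed = loads[i] * spread          # load the failed link dumps
        for j in (i - 1, i + 1):          # neighbors on the line
            if 0 <= j < len(loads) and j not in failed:
                loads[j] += shed
                if loads[j] > capacity:
                    queue.append(j)
    return len(failed)
```

The point of such models is the qualitative behavior: a lightly loaded grid absorbs a single failure (`cascade_size([0.3] * 10, 1.0, 5)` gives 1), while a grid run close to capacity lets the same failure cascade through every link (`cascade_size([0.9] * 10, 1.0, 5)` gives 10). Real agent-based grid studies replace this rule with actual power-flow physics.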