There happens to be a great solution for this known as Gaia hubs, which you can learn about in these docs (https://docs.blockstack.org/) in conjunction with the immutable identities associated with them.
They also brought up some other usability issues not mentioned in this critique, particularly around file pinning to keep files alive and, more generally, the waste inherent in duplicating data rather than separating the auth of user-owned data from the replication of the data itself across the DHT network (for which Blockstack has "Atlas").
I don't think you understand. Congestion pricing explicitly exposes the costs of the streets. Price spikes due to congestion create market signals for competition to come in and make it cheaper.
Right now, you have no clue where your taxes go, nor do you get a cent-by-cent breakdown of how much went to each cost.
There is an enormous amount of economic waste in state and federal budgets, even before considering ten-year government contracts where incumbents almost always win because of the exorbitant number of one-off accommodations made during the previous decade. Most U.S. government contract bids are required by law to entertain three bidders, yet most of the time the same one is chosen.
Also, by the way, Uber works this way. The price goes up when 100 people need the same 5 drivers, as opposed to when only 3 or 5 people need them. A lot of economic incentives are created here:
1. More drivers will come to the space because the pay is good.
2. People who can't afford it aren't going to pay the congestion price, which helps drivers prioritize and makes riders ask whether they really need the ride. It also helps competition come in and close the economic gap if there is one.
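The incentive mechanics above can be sketched as a toy model; the formula, the 0.5 sensitivity constant, and the rider/driver counts are all hypothetical, purely to illustrate how scarcity produces a price signal:

```python
def surge_multiplier(riders_requesting: int, drivers_available: int,
                     base: float = 1.0, sensitivity: float = 0.5) -> float:
    """Toy surge model: price rises as demand outstrips supply."""
    if riders_requesting <= drivers_available:
        return base  # no scarcity, no surge
    excess_ratio = riders_requesting / drivers_available
    return base + sensitivity * (excess_ratio - 1)

# 100 riders chasing 5 drivers: the spike is the market signal
print(surge_multiplier(100, 5))  # → 10.5
# 3 riders, 5 drivers: base price, no signal needed
print(surge_multiplier(3, 5))    # → 1.0
```

The high multiplier is exactly the "come make it cheaper" signal: it tells new drivers the pay is good and tells competitors there is a gap worth filling.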
Also, New York's energy market, along with California, Texas, and the MISO region, all operate real-time energy markets that use "congestion costs" as one framework for exposing prices based on demand.
The only difference is:
1. You actually get to see an objective, algorithmic breakdown of WHY the cost is what it is. Try asking the government for that breakdown of street costs today... ha.
2. The transparent pricing exposes market signals that invite competition to come in and compete at a lower price, making for a more economical system.
Both of which, over time:
1. Lower prices for consumers (in this case, people who use the streets).
2. Create a more efficient system (if the road system is inefficient and that is the root cause), as a result of 1.
In the long run, this is a positive move toward governments being more transparent about where costs are allocated. It's a far cry from being able to vote knowing where your tax money actually goes, but it's a step in the right direction, and it doesn't completely divorce government from the competition needed to keep exorbitant costs in check on the government side.
It is true that privacy and data ownership are two separate ideas that are often conflated. I think giving the user more control over what happens to their data empowers them to decide who has access to what, and where they choose to host that data.
With Gaia hubs, we enable users to host their own data wherever they choose, revoke apps' access to write to their data, delete data, or keep it and render it in another application whitelisted to interact with it. It's a work in progress, but I feel confident Blockstack is at the forefront of pushing this idea of data ownership, while also enabling an authentication protocol that associates an immutable identity with Gaia hubs, to enable data privacy as well.
Whether the immutable identity is anonymous is debatable, depending on how the user chooses to identify themselves, but the authentication protocol lets app developers choose whether to implement end-to-end encryption, as some cases might not need it.
The reality is that there is a catch-22 in (someone else's comment about) the "simple" solution of throwing everything on a local hard drive: how do you dynamically share data with people you care about, or companies you do business with, with high, real-time performance that way? We are trying to answer those questions at Blockstack with Gaia hubs: https://docs.blockstack.org/storage/overview.html But I would honestly love to see other similar ideas and learn more about the ecosystem of people approaching data ownership, where data ownership lets users decide what they want to be private, or not.
I think the root issue goes back to your second sentence: the average user doesn't want to be empowered when it comes to data harvesting. In fact, the average user would probably be firmly set against it. They have nothing to hide, no complaints about the current system, and see no value in having control over their data. Perhaps even negative value in taking time out of their busy schedule to manage it.
The trick to turning the tide in the data privacy battle won't be finding the right argument or implementing the right policy - it'll be finding a way to communicate the magnitude and potential impact of this problem to the masses before it's too late.
"the average user doesn't want to be empowered when it comes to data harvesting"
I think this is an extremely presumptuous statement. It writes off how little empowerment the average user has, conflating ignorance about data privacy and ownership (and the lack of user-friendly options for understanding their data, how it can be owned or migrated, and who has access to what) with carelessness, or with what in tech now passes for "consent": "accept and agree to the fine print, or don't use this platform required for work and modern life," or "allow this app, which is in no way, shape, or form related to pictures, social networks, or social communication, access to your pictures, phone calls, all stored data on your phone, and your contacts, or else don't download."
I think the news makes it pretty clear that most people are not happy about how companies like Facebook and others are using their data, now that they know. They did not know for a long while, and furthermore, what alternatives do they have?
This is very similar to telling a person that because they never knew this was being done to them, and because they currently have no other options even if they do know about it, this equates to consent or carelessness or a lack of wanting to be empowered. That is just false, and furthermore, a dangerous line of thought to apply to any situation.
Are you guys hiring junior devs? This looks like one of the few sane applications of blockchain technology that actually would make the world a better place...
With respect to the relationship with IPFS: with IPFS you really need to find a decent number of other people to "pin" your content. Filecoin would solve the problem of ensuring that people with whom you have no relationship actually do this.
You can see the apps utilizing this framework on app.co. My personal favorite right now is debutsocial.app, a decentralized social network where every user owns their own data using Gaia hubs.
This was one of the things that really drove me to get a Drobo back in the day. I liked the idea that I could effectively just throw hard drives in it and it would run. And run it did. Slowly, but it worked!
Drobo looks cool. That is the idea with Gaia. Right now our master branch lets you run `docker-compose up` and use the admin API calls to configure the "disk" however you want.
The main motivation here was that, regardless of which images were deployed to optional cloud host providers, the default was set to local disk to support users actually owning this data locally.
Additionally, we support a variety of other drivers, like Azure Blob Storage, etc.
It is nice that the hubs are set up to interact with an API where developers, by default, are storing data in this decentralized way.
I will check out Drobo more.
"Slowly, but it worked!" making things work is important! haha
First, thank you for the first implementation, back when the app was just 1.1.1.1. I've been using it for a while.
Not sure if you can answer this question, but are the performance benefits still there in conjunction with the VPN Google uses to encrypt traffic on Google Fi? This announcement mentions they have 2x the latency compared to WARP, but did not mention which Google VPN technology specifically (not sure if they have multiple); I assume something mobile-related, since this is a mobile application.
If I use the WARP app in conjunction with Google Fi, am I layering this VPN on top of Google Fi's 2x latency, thus slowing down the WARP VPN in order to gain Google Fi's other performance benefits, like optimized network switching?
Neither project is open source (that I know of), so it is hard to understand how the implementations overlap or not. I'm also not an expert in VPNs, so maybe this is not a good question, but I find myself reading Cloudflare's blogs a lot and couldn't help but ask.
I'm not sure, and I think you're kind of off-topic for this particular sub-thread, but we'll have a ton of performance data across a matrix of device, software, and network operators. And, when we do, we'll definitely publish it.
Compared to what kind of payment onboarding system? Outside of free cash, anything related to setting up a bank account or a credit card can take 7-10 business days if not more, after submitting an application. Do you know of any payment onboarding system that is faster? This seems like a random statement made in isolation, outside the context of the onboarding mechanisms of every payment system currently in use. Care to calibrate Lightning onboarding against any other mass payment system? Even if you use Square, PayPal, Venmo, etc., they need to be linked to real checking accounts, credit cards, or other verified (with long onboarding) payment systems.
Furthermore, 2-3% of all transactions on many of these systems are fraudulent, but the payment systems accept this as a tolerable level of fraud and cover the expenses, in exchange for faster transactions.
Square knows, for example, that it won't be chosen over a competitor for coffee shops if it takes on average 20 seconds longer to process the metadata accompanying each transaction to check whether it's an attempted fraudulent charge. So they consciously cut corners here and accept the liability, so we can all get coffee 8x a day.
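To make that calculus concrete, here is a back-of-the-envelope sketch; every number in it (transaction volume, ticket size, fee rate) is hypothetical, chosen only to illustrate why a processor might knowingly absorb a 2-3% fraud rate in exchange for fast transactions:

```python
# Hypothetical numbers for illustration only.
transactions_per_day = 1_000_000
avg_transaction = 8.00      # dollars; a coffee-sized purchase
fraud_rate = 0.025          # 2.5%, mid-range of the 2-3% cited above
fee_rate = 0.029            # hypothetical ~2.9% processing fee

daily_volume = transactions_per_day * avg_transaction
fraud_cost = daily_volume * fraud_rate   # losses the processor absorbs
fee_revenue = daily_volume * fee_rate    # what fast, frictionless volume earns

print(f"daily volume:   ${daily_volume:,.0f}")
print(f"fraud absorbed: ${fraud_cost:,.0f}")
print(f"fee revenue:    ${fee_revenue:,.0f}")
```

Under these made-up numbers, fee revenue still exceeds the absorbed fraud losses, which is the whole bet: speed drives volume, and volume pays for the fraud.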
This is why Bitcoin has industry-standard confirmation blocks, so the network can address and blacklist attempted double spends and keep a master chain.
This is basic blockchain 101, compared against the standard onboarding we all have anecdotal experience with on every mass payment system built on layered digital transaction technology, and the losses associated with it.
The Lightning network is, as mentioned above with links, rapidly evolving to expedite this while still maintaining the underlying benefits of transaction consensus that avoid fraud.
You may deal with fraudulent exchanges, as you will with fraud around any new technology (heard of the internet? there were lots of fraud sites, and much worse, when it came out, and there still are; we all just get smarter about what we click and, hopefully in this case, about who we throw our money at, which you should be doing anyway with any new technology). But the fraud here is associated, as the top-ranked comment said, not with the protocol but with the flurry of business activity around it, and is really null and void to the technical conversation itself, which is what evolves communities like this past these handwavy statements.
I really don't mind constructive criticism of blockchain-based technologies, but the issue is that the conversation rarely evolves, because of cargo-culting statements like this that ignore the very benefits of moving to a blockchain-based payment system.
> Outside of free cash, anything related to setting up a bank account, or a credit card can take up to 7-10 business days if not more
You are forgetting concurrency. A bank doesn't stop processing transactions for 7 days while opening 1 account. Thousands, millions, or more accounts can be opened concurrently. If account demand skyrockets more branches can be added, more tellers hired, banks can compete on efficiency of service.
Bitcoin's 7 TPS rate limit is global and immutable (without a fork). You cannot add more nodes to satisfy demand. You cannot add more hash power to satisfy demand. You get 7 TPS or you try to get consensus on upgrading (which almost always means a fork, which ironically is an extreme form of fiscal stimulus as it doubles monetary supply).
You don't have to do it in the street... but yes, this is true for many exchanges if you want to take an existing currency and turn it into another one, or if exchanges are how you're looking to transfer money. You can also receive BTC without ever having bought it, and send it to addresses without ever using an exchange.
Regardless, your point only further proves that the bottleneck for onboarding users to Lightning en masse is, if a problem at all, not one unique to that network or to any other payment system we currently use.
My point is that since its impractical for most people to use bitcoin without a bank or credit card it's not really a viable alternative since the system it is attempting to subvert is still necessary. If you're an enemy of the state and need to move your money covertly, bitcoin offers some limited utility, but for the vast majority of people it is a waste of time.
You forgot that this initial onboarding transaction is just a tiny step in the complex process you listed above, yet even this tiny step is already a significant barrier: it prevents more than about 200 million users from opening a channel per year. And that is already assuming those channels never have to settle and that no other transactions ever happen on the blockchain.
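The ~200 million figure follows directly from the throughput ceiling itself. As a sanity check, assuming the commonly cited ~7 TPS limit and, generously, that every single on-chain transaction is a channel-opening transaction:

```python
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000
TPS_CEILING = 7                        # Bitcoin's rough global throughput limit

# Upper bound on Lightning channel opens per year,
# if the chain carried no other transactions at all
max_channel_opens = TPS_CEILING * SECONDS_PER_YEAR
print(f"{max_channel_opens:,}")        # → 220,752,000
```

Roughly 220 million channel opens per year is the absolute ceiling, and any settlement or ordinary on-chain activity only pushes the real number lower.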
"are involved with Kubernetes, etc. None of that stuff probably is even close to working on Fuchsia and I don't think server side is a priority with that."
Actually, the networking stack in Fuchsia (or more particularly Zircon, the kernel it is optimized around) is written in Go.
Kubernetes is written in Go; it is essentially a system for implicitly defining scaled Linux boxes on a giant network, and its most recent release allows for declarative architectures as well, much like how userspace services for graphics drivers in Zircon will enable more declarative and optimized use of expensive resources (in this case, the modular hardware architecture for mobile operating systems that Fuchsia was originally envisioned for).
I'm also pretty sure Google has experience with mobile operating systems (Android). Fuchsia is literally a response to over a decade of trying to interface Android with POSIX across multiple chains of monopolized, closed-source hardware architectures: being intimately familiar with the evolution of trends in mobile hardware architecture, and trying to rewrite a kernel to optimize for it.
"Fuschia will need to play nice...Part of that is supporting linux based stuff; which especially on Chrome..." Chrome had to have an entire team dedicated to resandboxing their tabs to mitigate for spectre and meltdown, which are both results from the unquestioned but growing obscurity due to unquestioned implementations between hardware and Linux integration, which is something that Fuschia attempts to take a step back from, and by doing so simultaneously make it easier for open source development on hardware architectures while optimizing for them.
Still, full site isolation remains an advanced option in Chrome that costs about 10% in memory overhead, so people don't turn it on, if they even know to check for it or what it means (most people don't).
Crucially, Kubernetes uses Docker, which uses a lot of low-level Linux and POSIX stuff. Kind of an issue if that stuff isn't there. Kernel virtualization on Fuchsia could of course become a thing (like it is on Windows and OS X), but so far I've not seen much on that topic. WASM on Chromium might become a way out as well, of course.
But the bottom line is: empty room. A beautiful OS with an empty app store is kind of a non-starter in the current market. Windows Phone found that out the hard way, as did several other mobile operating systems that did not quite make it or continue to struggle: Sailfish, WebOS, Firefox Mobile, Ubuntu Mobile, etc. Even ChromeOS struggled until they added Android support and Crostini.
I have no idea what ART is, but I doubt it addresses all the concerns I listed above. On Android, there are plenty of native libraries and apps these days as well. I'm pretty sure these don't work as-is without a compatibility layer that essentially replicates a lot of Linux/POSIX stuff, which Fuchsia does not implement.
In any case, it wouldn't be the first time that Google walks away after putting lots of development in something. In general, I think they are closer to merging Android and Chrome OS than they are to replacing either with Fuchsia (not to mention convincing OEMs like Samsung to actually use it).
Since Android 7, Google has been clamping down on NDK users that try to use anything that isn't part of that list.
Since Android 8, APKs are only allowed to reach for their own internal filesystem and use SAF for anything else. Something that will be further enforced on Android Q, so no luck trying to peek into /dev, /share and similar.
As for virtualization, Fuchsia already has its own KVM equivalent called Machina, which so far can run Debian on top of Zircon. With several compatibility changes for supporting the ART runtime in Zircon already merged in, it should also be possible to run Android apps with this.
But perhaps the reason Fuchsia won't struggle like the other OSes you mention is that it may be able to run all the Android apps in the Play Store from day one, thanks to Machina, allowing a smoother transition. It would be a process similar to what Apple did with the PowerPC-to-x86 switch, except that in Google's case it's for a completely different OS.
There is also rkt. And anyway, while Docker containers are great, they are just an abstraction over cgroups and namespaces. You forget that cgroups are a relatively recent concept in Linux, and Docker didn't even have namespaces in its first, second, or third iteration, yet you act like Docker relies on the immutable principles of POSIX.
Anyway, Docker is a good example of how current Linux systems are not optimized for modular sandboxing and containerization. Still, people even in tech are so uneducated about the importance of working with bare bones (I started in C, so I allocated bytes as I needed them and always considered how not to use them first; that is a far cry from npm-installing an Express server and watching the endless train of dependencies it invokes) that they still do not secure their containers. The number of Ubuntu 18.04 standard base images I see running a Docker container that simply contains a Python app or something equally trivial, live in production at some of the top tech companies, with no Linux hardening whatsoever, when you can Google and download a rootkit for them, is the terrifying norm of centralized web application companies today. I am really not going to buy the idea that Docker containers bearing full replicas of the operating systems they sit on top of are a justification for POSIX.
If you want increased modularity for security, sandboxing, and running different applications, look at QubesOS, which is already far along and has its own bare-metal hypervisor; it is much like how Docker works in userspace, but optimized all the way down to bare-metal hardware. Fuchsia takes a similar approach to optimizing modularity in mobile computing hardware architecture.
" Also, several other mobile operating systems that did not quite make it or continue to struggle."
This is true, but this is coming from the same company that has experience designing both software and mobile hardware architecture. Just because something is not already popular and widely adopted isn't a reason not to do it. I'm an anti-monopoly person myself, especially in the world of technology.
You can read my other comments for the justifications around the need for this. As someone coming from the hardware architecture design space, from Qualcomm Snapdragons all the way to 14nm iPhone architecture, there is a need to re-modularize the kernel for advanced execution and increased competition in this space. POSIX is not sustainable looking 10, 20, or 40 years into the future of hardware computing, particularly the next ten. Android game developers who make a living off Candy Crush do not really seem to care about this impending problem, only that they will have to climb yet another learning curve if the platform gains adoption or becomes competitive in the space, which sucks, but it's not as bad as you'd think.
Besides, forcing people to keep climbing learning curves keeps the market competitive and keeps people from becoming so religiously entrenched that Google's current Android API is treated as an uncontested god. Yeah, it's hard to make money, sure; it's competitive, sure. But we so easily forget that Android was a first stab at an open source response to the iPhone. The first Motorola Android phone came out my sophomore year in college; now people can't imagine their 6-year-olds going to school without an iPhone 7. We often assume we need things to be the way they already are and are not accustomed to change, but I can tell you Fuchsia is needed in the space, operating system competition is needed in the space, and in the next decade we will be thinking differently about excessive use of memory, dependencies, and latency and looking for something like this. Luckily it will be about a decade into development at that point, about where Android is now, and yet people say it's unreasonable to consider anything else.
I was learning about kernel development around the same time I was trying to understand how conditional and speculative execution worked, as a result of really trying to understand every step that happens when a system call hands something to the kernel, the kernel does something with it, and hands it back to the system.
I kept asking, but a lot of supposed Linux nerds I spoke with couldn't tell me how the kernel and userspace truly hand off data or negotiate memory with each other, leaving me drawing out trap-handling routines on a posterboard and penciling in GDB disassemblies of memory for system call source code, feeling dumb for not knowing. Meanwhile, we all find out about Spectre and Meltdown, and that there really is no secure handoff without significant performance degradation and/or increased sandboxing for things like the browser, etc.
And of course, what is the root of the issue here? The root of the issue is that Linux is too deeply integrated into monopolized hardware architectures, which is perhaps why AMD's stock price skyrocketed the day Spectre and Meltdown came out, when we learned that the only near-term mitigation for this legendary security vuln would cost around 30% in performance across the board on Intel, versus much less on AMD, whose architecture was less prone to the vulnerabilities around speculative execution.
The more I learned about these things, plus issues with other basic functions like wait() or strcpy(), or in general the lack of protections around C, the more I entertained the idea of looking for alternative operating systems. The networking stack in Fuchsia is written in Go, for example. While I don't know much about Go, can it be worse than C, which leaves it to almost every developer to manage their own memory, with all the performance and security implications that entails?
Magenta is designed to be modular enough to withstand the coming waves of hardware architectural evolution. We are approaching 5nm development (the theoretical limit of how small a transistor gate can get before we can no longer control switching due to quantum effects), and this is not far off: Intel already has 10nm in production, and probably others do now as well (it's been a bit since I checked). Then comes quantum computing:
Because quantum computing (this is debatable, and I know the least about this) is not ready for mass production, particularly at mobile scale, my conjecture is that once we reach the theoretical limit of how small a transistor can be, designs will turn to optimizing performance in every other way we can, without relying on powerful processors to accommodate memory bloat or endless dependencies (yes, I also pray this forces JavaScript modules to get better or die out, but that's a long-range dream).
Meanwhile, AMD gains ground post-Spectre-and-Meltdown. So, in summary, there are a lot of options to consider other than optimizing for POSIX forever.
Therefore, I am glad there is a push to explore alternatives. I feel as though anyone who thinks it isn't potentially beneficial to explore non-POSIX-based distros doesn't work with Unix-based systems in any depth on a daily basis. If you do, and you think Linux, for example, is the best operating system in the world and can't be improved upon outside its defining protocols, then I would love to hear from you on this thread. I am not nearly as experienced as most people who work with Linux, but I can say that most I have interacted with view it as a love-hate relationship for many of these very reasons.
You can also see this trend of unhappiness with Linux OS defaults out in the wild, outside of Google.
More and more serious applications are looking to bypass userspace application development, either to be more secure or more customized, or most often, if not for security, to optimize performance for the things we used to consider the standard Linux kernel somewhat good at.
Here are a few varied examples off the top of my head, anecdotally, from trying to solve everyday problems for users with Linux, but I am sure there are many more:
1. WireGuard is an example of a VPN where communication negotiation is handled more and more in the kernel, because traditional VPN designs left TLS handoffs in userspace (what is the point of userspace for serious application development anymore, when this is the trending security default): https://www.wireguard.com/
2. Sysdig implements eBPF functionality to let sysadmins and DevOps engineers customize and/or secure systems in ways we no longer trust the default Linux userspace/kernel-space design to handle: https://dig.sysdig.com/c/pf-blog-introducing-sysdig-ebpf
> The more I learned about these things plus issues with other basic functions like wait() or strcpy() or in general the lack of protections around C, the more I entertained the idea of looking for alternative operating systems.
Dig into the worlds of Burroughs B5500 (now Unisys ClearPath), IBM OS/360 (now IBM z), IBM OS/400 (now IBM i), and the now gone Mesa/Cedar, Oberon, Active Oberon, SPIN OS, Topaz OS, Mac OS/Lisa, Singularity, Midori, Inferno, ...
And yet you are still alive and not starving to death. But the banter I see on here is Android video game developers complaining that a move away from Android will be the end of them.
Google is not stupid; they are not going to deprecate Android overnight and replace it with Fuchsia. This operating system has been developed in the open (you can see the commits on GitHub going back at least two years, I think more), and there will clearly be many more iterations of its development, with increasing adoption each time as people make money on the platform. It was the same with Android, which took years before it reached the threshold of 50% use compared to iPhones, and no iPhone video game developers that I know of starved to death adapting to that change. The drama on this thread about API changes is significant, for sure, and I understand Google deprecates APIs, or suddenly starts charging for them in ways that make small companies close up shop overnight (like Google Maps, for example). But that is not a justification for ignoring that the objective limitations of Moore's Law, and the need for competition in computer hardware, are forcing companies with experience in both spaces to reconsider kernel development at a more fundamental level.
Which is why I'm confused about all the top-ranking comments complaining that Android will change its API for this. Will this require changes for Android app developers, if that is the case? Regardless, this seems like a more fundamental layer of improvement.
Burroughs B5500, the first OS written in a high-level systems language (ESPOL, later NEWP), in 1961, eight years before C came into existence. It already used compiler intrinsics instead of Assembly, and the concept of unsafe code blocks.
IBM OS/360 famously introduced the concept of containers and, alongside IBM OS/400, also has language environments: think a common VM for multiple languages.
IBM OS/400, originally written in a mix of Assembly and PL/S, uses the concept of managed runtime with a kernel JIT called at installation time, and uses a database as filesystem.
Mesa/Cedar, a systems language developed at Xerox PARC, with the same IDE-like experience as their Smalltalk and Interlisp-D workstations. Uses reference counting with a cycle collector.
Oberon and its descendants, Niklaus Wirth and his team's approach to systems programming at ETHZ, after his second sabbatical year at Xerox PARC.
Mac OS/Lisa, the first versions of Apple's OSes, were written in Object Pascal, designed in collaboration with Niklaus Wirth, whose extensions were later adopted by Borland for Turbo Pascal 5.5.
Singularity/Midori, the research OSes designed at MSR, largely based on .NET technologies.
Inferno, the actual end of Plan 9, using a managed language for userspace, Limbo.
SPIN OS/Topaz OS - Graphical workstation OSes for distributed computing developed in Modula-3
Their concept of Gaia hubs is the association of immutable identities with user-owned storage, without the need for IPFS pinning or other data duplication: docs.blockstack.org
You can see the most recent conversation about it here: https://forum.blockstack.org/t/cannot-find-ipfs-driver/6147 in which I convinced someone looking at IPFS to use Gaia instead.
Full disclosure: I am an engineer at Blockstack.