Janus WebRTC Server (meetecho.com)
218 points by simonpure on May 31, 2020 | 63 comments


I've had to look into this professionally for local/remote streaming solutions, and I came across this paper in the last couple of weeks which has been a huge help in understanding my use case:

http://lup.lub.lu.se/luur/download?func=downloadFile&recordO...

One of the most useful/interesting use cases to me is the ability to have a point-to-point encrypted stream without having to jump through weird IoT PKI hoops.

Ninja edit: If anyone has experience with Janus and/or WebRTC on edge devices I would very much like to talk as I could really use a solid consultant in this realm.


I do WebRTC on Edge/IoT devices (mostly MIPS/ARM devices running Linux). Customers are mostly teleoperations (robotics) and security cameras.

Most customers run an MCU/SFU on a server, but then just a WebRTC client on the device. We do simulcast on the device to an SFU, and then distribute from there. Happy to answer questions here or directly.

I don't want to be disrespectful and sell other stuff on this thread though. I like seeing people realize how great Janus is :) don't want to distract from that conversation!


I'm concerned about targeting things like Janus to the MIPS architecture for streaming over WebRTC, because I typically only see MIPS on legacy devices in my world. How does MIPS handle this stuff?

PS: I know that Amazon has a product in the space and we're vetting AWS as our cloud provider due to its diverse product offering. I've been looking at open-source solutions to avoid vendor lock-in, but I would love to hear if you have any experience with the Amazon offering!


I wrote the Amazon offering! By design I implemented the same PeerConnection API; I really didn't want there to be vendor lock-in. I included a 'signaling client' in-tree, but you can do your own easily. The end goal is to get that AWS implementation running on 'true' embedded devices. We are going to switch to mbedtls soon, and we are working on getting it onto FreeRTOS.

I also wrote Pion WebRTC, the Amazon offering is just a re-implementation of that in C. Just trying to decouple media pipelines and transport. I think WebRTC is a really great protocol, hopefully we can get software to match it :)


MIPS is still alive in the IP camera world. There exist very cheap SoCs (e.g. the Ingenic T20 - http://www.ingenic.com.cn/en/?product/id/14.html ), tailored towards making cheap network cameras (~€20 retail price for the full camera). I guess at that price point, the ARM license fee does become visible in the bill of materials.


Incidentally the Allwinner F1C100s used in this business card (https://www.thirtythreeforty.net/posts/2019/12/my-business-c..., https://news.ycombinator.com/item?id=21871026) contains an ARM9 series core clocked at 900MHz alongside 32MB of on-die DDR1, is designed for dashcam-type applications (SDIO, LCD, USB2 OTG, no PHY) and the author of the linked article was able to buy them for $1.42 each.

I've long been curious about MIPS from a hobbyist/tinkerer/maker perspective and would be very interested to know what silicon I might select at a similar price point.


Are there resources you might recommend on using WebRTC for the teleoperations use case?


BTW, here's the best I found so far (in terms of how simple it makes everything):

https://pypi.org/project/rtcbot/


If you are interested in cross-compiling the WebRTC implementation so it runs directly from within edge devices, Janus might be one of the best options out there.

Kurento offers a variety of features apart from WebRTC, but it is more intended to be cloud-deployed as an independent media server, you could think of it as a "proxy/bridge" that distributes media between producers (like RTSP cameras) and consumers (WebRTC clients).

If those features are not needed, for purely WebRTC routing there's also mediasoup, with the same philosophy: N media producers send video to a central server, and the server distributes it to M devices consuming the media.

True peer-to-peer comms (i.e. sending data directly from producers to consumers, without an intermediate centralized server) was a cool thing to aim for by the WebRTC standard, but real-world practical constraints (i.e. bandwidth) put a limit to the usefulness of a truly p2p architecture. You'll find more info about this searching for MCU and SFU.


I've been vetting solutions in the space and I think I have a need for both...

I'm having a hard time figuring out direct streaming from a device on the local LAN without having to go out and back in through the public internet. There is a hard requirement for an encrypted, browser-trusted "point-to-point" stream (i.e. nobody can listen in on it), plus the ability to ship that stream out to a "proxy/bridge" media server solution like you describe.

I've looked at EvoStream and Wowza as licensed solutions, and I am super interested in the ability to use something like Kurento, ideally with self-signed certificates, talking to a cloud-deployed/redundant media bridge that in turn broadcasts to the remote clients consuming via the public internet.

The OpenCV tie-in with Kurento would be of great interest to me as well. That could be a game changer for what I'm currently embarking on!

Would a hybrid approach make sense to you, as you're definitely an expert in this space? (i.e. using Janus on-edge and Kurento in the cloud at the same time)

PS: I apologize if my questions/communication is rough... I'm two weeks into a nightmare trying to sort all of this out so I'm still coming up-to-speed on the technical details of WebRTC + secure streaming!


You can deploy Kurento (or Janus itself, for that matter!) inside a subnet of the LAN to act as a media bridge. If the security group / firewall / whatever is then configured to allow the media bridge public network access, it can be used to send the data directly to remote peers.

This however would have the bridge in the middle of LAN connections, so it wouldn't be a point-to-point flow, and the server would be able to "see" data passing through it.

There is however some new work on what's called "insertable streams" in Janus, which would allow e2e encryption directly between sender and receiver, thus having the media bridge purely act as a "blind" router, not being able to see the streams passed through it.

Janus has some very recent support for this [0], while this is (for now at least) not standardized and out of scope for Kurento.

[0]: https://www.meetecho.com/blog/janus-e2ee-sframe/
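To make the "blind router" idea concrete, here is a toy sketch of the kind of per-frame transform that insertable streams let you install. This is NOT the actual Janus/SFrame implementation: real deployments use a proper authenticated cipher, and the XOR below is purely illustrative; the browser wiring shown in the comments uses Chromium's `createEncodedStreams()` API.

```javascript
// Toy per-frame transform illustrating the insertable-streams e2ee idea.
// NOT real crypto: SFrame uses authenticated encryption; XOR is illustrative only.
function xorFrame(data, key) {
  const out = new Uint8Array(data.length);
  for (let i = 0; i < data.length; i++) {
    out[i] = data[i] ^ key[i % key.length];
  }
  return out;
}

// In a Chromium browser you would pipe each encoded frame through a transform:
//   const { readable, writable } = sender.createEncodedStreams();
//   readable
//     .pipeThrough(new TransformStream({
//       transform(frame, controller) {
//         frame.data = xorFrame(new Uint8Array(frame.data), key).buffer;
//         controller.enqueue(frame);
//       },
//     }))
//     .pipeTo(writable);
```

The point is that the transform runs on the encoded payload before it leaves the sender and again after it arrives at the receiver, so the SFU in between only ever routes opaque bytes.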


I've also tried to set up Kurento and Janus, and Janus is 100000000000% easier


Google's WebRTC native library is a good option too. It's basically a piece of the Chrome browser.


Out of curiosity: what is an edge device? An IP camera? Depending on the professional grade of your use case, you could talk directly to Meetecho, as they are the original developers and offer consultancy around Janus.


IP cameras mostly, plus some remote-audio stuff. I think that may be a logical next step for me, so I'm going to bring it up to leadership tomorrow.


I used Janus to map my IP camera's stream to a WebRTC page using a Raspberry Pi's hardware transcoder. It works quite well.


I'd be super interested in seeing what you've got! Is your implementation posted anywhere?

I really like your Stylus project btw!


There are two parts to the implementation. The first part is a docker container that transcodes a stream for Janus using the hardware on the Pi:

https://github.com/mmastrac/gst-omx-rpi-docker

I run this with the following config (just remember to map the RPi /dev/vchiq device into the container!):

  gst-launch-1.0 rtspsrc location="rtsp://admin:(password)@(host):554/cam/realmonitor?channel=1&subtype=0" latency=500 ! rtph264depay ! h264parse ! omxh264dec ! omxh264enc target-bitrate=500000 control-rate=1 ! video/x-h264, profile=baseline ! h264parse ! rtph264pay name=pay0 config-interval=1 pt=96 ! udpsink host=(janus) port=8004 sync=false
The second part is the Janus configuration magic that creates the appropriate stream:

  [gstreamer-sample]
  type = rtp
  id = 1
  audio = no
  video = yes
  videoport = 8004
  videopt = 96
  videortpmap = H264/90000
  videofmtp = profile-level-id=42E01F\;packetization-mode=1\;level-asymmetry-allowed=1
  videobufferkf = yes
> I really like your Stylus project btw!

Thanks! Been working on Stylus a bit more this weekend. Nearly have all the features I need for my own setup.


I'll chime in to comment that WebRTC, as complex as it is, is of course usually just one part of the equation, and arguably a small one.

You'll have to correctly deploy your servers, including a TURN server to aid ICE (NAT/firewall traversal), and configure it all appropriately in both the server and the client browser applications. It is surprising how many people have problems getting ICE, STUN and TURN servers to work correctly; understandably so, because it is a complex topic.
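As a rough sketch of the client-side half of that configuration, the browser's RTCPeerConnection just needs to be told where the STUN/TURN servers live (the hostnames, ports and credentials below are placeholders, not real servers):

```javascript
// Placeholder ICE configuration; swap in your own STUN/TURN hosts and credentials.
const iceConfig = {
  iceServers: [
    // STUN: address discovery only, no media relaying.
    { urls: "stun:stun.example.org:3478" },
    // TURN: relays media when a direct P2P path cannot be established.
    {
      urls: "turn:turn.example.org:3478",
      username: "user",
      credential: "secret",
    },
  ],
};

// In the browser: const pc = new RTCPeerConnection(iceConfig);
```

Getting this part right is cheap; the hard part is making sure the TURN server itself is reachable on the right ports and that its credentials line up with what the client sends.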

Then once you have the basics of streaming media over the network, and your application developed (think video-based customer support, education conferencing, anything for which WebRTC is a good choice), there is still a myriad of things to worry about for a production-grade service: user permissions, autoscaling, metrics gathering, dynamic distribution of all the video streams through multiple media servers in order to accommodate varying loads, etc.

While developing Kurento, I also work with the team that makes OpenVidu, a project that builds upon Kurento and aims to provide an all-in-one solution to all these problems, handling all the complexity of WebRTC for you.

Have a look at it if you need more than just the basics offered by media servers such as Janus, Jitsi, mediasoup, or Kurento itself:

https://openvidu.io/


For those interested in Janus and WebRTC in general, I highly recommend reading the Ph.D. thesis [1] of Lorenzo Miniero (creator of Janus) on the project and the state of the art of WebRTC. It's brilliant and enlightening. The Janus project is a real marvel of engineering.

[1] http://www.fedoa.unina.it/10403/1/miniero_lorenzo_27.pdf (PDF)


I didn't know anyone else apart from me and my tutor had read my Ph.D. thesis... :-) Glad you found it informative! Janus was indeed born during my research efforts there.


Janus is amazing, very easy to use. I used to develop MCU/voice control systems, but unfortunately for large companies you really need a dedicated team to manage this infrastructure (scheduling, video/voice quality issues, among other things). Good if you have the resources to do it.


Newbie here,

Can someone explain why webrtc needs a server and how Janus fulfills this role?

As far as I understand webrtc is meant to be peer to peer with minimal server signaling, so what role does a WebRTC server play?


These are the big points I see for servers, I am sure there are more though!

* Less resource usage for users

If you do mesh signaling, every user connects with every other user via P2P. This means that in a 4-person conference call, everyone needs to upload their video 3 times. If you have a media server, each user uploads only once, and then the server distributes the video. That means a lot less CPU and network usage for each user.

* P2P Connections reveal details about the user

If users connect directly to each other, they are able to figure out details like each other's public IPs. If you route everything through a server, you can anonymize more things.

* Protocol Bridging

People want to view RTSP/RTMP/$X via WebRTC. A media server is the only way to make it happen.

* Less variability to deal with

When doing P2P connections you deal with a lot more variables. It is harder to figure out which user's internet is causing issues, or to debug encode/decode problems. A few times running an SFU has come in really handy, because I was able to debug something that would have been impossible with pure P2P.
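The first point is easy to quantify with a back-of-the-envelope helper (the function name and topology labels are mine, purely illustrative):

```javascript
// Streams each participant must upload in an n-person call.
function uploadsPerUser(n, topology) {
  // Mesh: you send one copy of your video to each of the other n-1 peers.
  // SFU: you send one copy to the server, which fans it out for you.
  return topology === "mesh" ? n - 1 : 1;
}

// A 4-person mesh call costs each user 3 uploads; via an SFU it is always 1,
// no matter how large the call grows.
```

The server pays for the fan-out instead, which is exactly the trade an SFU is designed to make: bandwidth on a well-provisioned machine instead of on consumer uplinks.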


Just finished setting up a Janus server while building a web application in the education space which needed video chat and the ability to record the videos.

Absolute dream to work with! Thank you Lorenzo and the rest of the Janus team. Hopefully we'll get the chance to make it down to Janus Conf!

If anyone is looking for help adding it to their app hit me up!


I am currently working on a project to solve my own needs around this. If anyone wants to collaborate or chat about it, please e-mail me. I've done some work around the Xiaomi cameras with the custom firmware, so I'd love to talk to others about what they want to see.


I hacked together a proof of concept of using Janus to wrap the H264 RTSP stream of the Xiaomi Dafang into a WebRTC stream without transcoding, giving close to real-time streaming of the ip camera to browsers without needing a plug-in.

Currently, Janus and Nginx (https/auth) run on a cheap VPS, and a device in the NAT network of the ip camera creates a Wireguard tunnel to get the RTSP stream to the VPS without needing to open/forward ports on the NAT.

Ideally, more of the components could run on the camera itself, but I haven't gotten around to cross-compiling anything to the MIPS CPU of the Xiaomi camera. Will contact you so we can chat.


We use Janus as a WebRTC SFU for projects in the education/gaming sector. Its general purpose approach to WebRTC has been a good foundation to help us build custom solutions.


Did you write your own plugin or are you using VideoRoom or something? The thing that turned me off of Janus is that VideoRoom seemed way too high level and made tons of schema assumptions, but all of the important SFU functionality seemed intertwined into the codebase of that plugin, so if I were to actually want to build a "custom solution" it looked like I would have to maintain an annoying-to-merge fork of that thick layer :/.


Janus itself doesn't make any assumptions about a specific use-case. All the functionality outside of the core RTC stuff is implemented with plugins. The default "Video Room" implementation is just a Lua script [0]. Mozilla has written their own SFU plugin (in Rust) [1] for game networking that powers Mozilla Hubs [2].

[0] https://github.com/meetecho/janus-gateway/blob/master/plugin...

[1] https://github.com/mozilla/janus-plugin-sfu

[2] https://hubs.mozilla.com/#/


No: the vast majority of the VideoRoom functionality is written in C, and it is where all of the actually-hard-to-do video SFU stuff -- like quality control feedback and SVC support, particularly in an end to end encryption context with all the codec-specific workarounds--is commingled together with the notion of "rooms" (which is a really awkward and specific high-level abstraction with a schema that you have to abuse for basic use cases).

https://github.com/meetecho/janus-gateway/blob/master/plugin...

Yes: if you don't want any of the complex video functionality, you can easily write your own Janus plugin, and that maybe sounds reasonable for some trivial game "SFU" where you are just going to move around some data channel packets... but at that point you can (and I argue should) just use libwebrtc (I do this, and I helped one of my friends do this for his product in a weekend: people act like it is hard to compile but it really isn't).

(Even more so: the Lua script you linked to looks more like a demo/example of a way to use the Lua plugin to get some functionality vaguely similar to the VideoRoom plugin, and it is notably ridiculously long and contains a lot of codec-specific knowledge, while not having anywhere near the actual functionality of the actual real C VideoRoom plugin. It is as if Janus is just a super low-level WebRTC library in the form of a framework, with an explicitly monolithic plugin doing everything.)


It sounds like you have a use-case that you should just write your own plugin for. The out-of-the-box video room plugins aren't suitable.

> the vast majority of the VideoRoom functionality is written in C

It's still just a plugin that hooks the same callbacks and implements the same interfaces as any of the rest of them. Feel free to implement your own.


At that point why wouldn't I just use libwebrtc? The reason to get an off-the-shelf SFU is because all the hard work is in handling all the codec-specific workarounds, being able to handle keyframe request sharing, responding to RTCP bandwidth feedback to do SVC layer switching, and now doing all of this while most of the state is encrypted due to insertable streams... this is all hard stuff that people keep learning more about and for which the state of the art is a moving target due to browser changes.

I would expect 100% of applications doing anything at all with video to want all of that functionality, but only some small number to have a "room" concept that maps to the idea of the specific schema imposed by the VideoRoom plugin. It is thereby strange that all of that general video functionality is commingled together in an 8k-line C file with all of the high-level room abstraction... the answer with Janus is always "write your own plugin", but either you are doing something so trivial that Janus doesn't seem to be doing anything but the lowest-level WebRTC layer, or, as far as I can tell, you have to fork the VideoRoom plugin and then hope you can merge changes from upstream back into your plugin.

Am I wrong here? Like, I would love to find out I am wrong here ;P. (Which is why I was asking the OP about if their "custom solution" was a fork of VideoRoom: to see if they told me something I don't know.) But when I skim through that C file (or even the Lua file! though that demo very notably seems "incomplete" vs. the "real" C copy) I see tons of code referencing all of the codec-specific negotiation and stuff that I would explicitly be using Janus to get, so I can't not use or fork the VideoRoom plugin without losing the purpose of the platform as I would be reimplementing all of the hard parts myself (again, unless you are doing something so trivial--broadly speaking, something that doesn't involve video--that you frankly should be using libwebrtc or one of its various alternatives, such as Pion).


Not sure what you were expecting: at the very foundation of WebRTC is SDP, which implies negotiation, and with endpoints supporting potentially different codecs, negotiation is very much important whether you like it or not. That's why the VideoRoom plugin does need to take that into account. I won't get into the discussion of how complex a fork is to maintain: I always hope people contribute back what they add (assuming it's generic enough to fit the project and not customer-specific), rather than keeping it to themselves.

That said, the vast majority of people don't really need to write their own plugin, or even customize existing ones. What we foster a lot is leveraging existing plugins as much as possible, maybe combining them at the application level, and not reinventing the wheel; that seems to work for most (it certainly does for us, for our own applications).

On the Lua demo, it is indeed a bit more limited than the C counterpart (we clearly didn't invest as much time on it), but I'd disagree on the "incomplete" part. All the relevant parts are there, and most importantly, it's supposed to be much easier to extend and modify than the C version. There's at least one big company we're aware of that's using it in production and is very happy with it.


> Not sure what you were expecting: at the very foundation of WebRTC is SDP, which implies negotiation, and with endpoints supporting potentially different codecs, negotiation is very much important whether you like it or not.

I would expect the logic for negotiating streams to be an unrelated layer of abstraction to the concept of room management? That the code and work 100% of video apps want--SVC, end to end encryption, negotiation complexity--is mixed up in the same giant C file as a monolithic plugin with the code for JSON configuration files of a "rooms" abstraction that is a hardcoded notion of a single narrow vision of a multiparty video chat server is really awkward, and means that at best every single application ends up either as a messy fork or with a thick middleware adapter that attempts to translate between these concepts.

It is like wanting a pub-sub solution to build your own chat system but being handed a full IRC server as your building block, where you either need to fork the system to rework the notion of "channel" and the various user mode flags to match how you want to do chat--and then hope you can easily still rebase your work to the latest codebase, as the implementation of basic things like "send a message and have other people receive it" is mixed together with the notion of "a half-operator is someone who can kick users but not change the list of operators"--or build some thick middleware adapter layer that is simulating a simpler pub-sub system on top of degenerate channels.

If the code for "rooms" was a different layer of abstraction from the code for "WebRTC VP9 SVC signaling", it would allow me to just build the parts I want on top--so I can get the semantics of a public government hearing, which is different from a business meeting or a webinar or a "house party" without figuring out how I am going to translate my concept onto the existing meeting semantics of the VideoRoom plugin--or at least if the code for this was cleanly placed into a separate C file then I would be much happier with this idea that I am supposed to "extend and modify" the codebase to implement my own semantics, as I wouldn't be so worried that one day I am going to get a merge conflict on this 8k line file full of C code I am hacking on :(.


Then I think what you can use are the Lua or Duktape plugins, which were indeed written to allow people to write their own logic without having to worry about C or forks: even if the C code of the plugins is updated, your code is in a script that is loaded dynamically and is external to them.

If you forget about the videoroom.lua code and do something from scratch, you're free to handle the logic however you want: handling media is as simple as saying "send incoming media from A to B and C", and media-wise that's all you need to do in the script itself to have the C portion do the heavy lifting for you. You still need to take care of SDP and signalling, but you can do that on your own terms. I still have a plan to implement yet another plugin that delegates the logic to a remote node using something RPC-based, but unfortunately I didn't have time for that yet.


If you want low-level video routing functionality without the added layer of room-like logic, i.e. something that handles WebRTC and then tells you "now, here is your incoming video flow, do whatever you want with it", you might want to check out Kurento.

However, our WebRTC stack has only a minimum of congestion-control features (plain REMB, no simulcast), and it doesn't implement SVC or newer toys like insertable streams; nor does it completely abstract you from the grunt work that WebRTC leaves up to the user (like signaling, setting up a TURN server, or having a minimum understanding of ICE in order to troubleshoot when problems arise).


Yeah: the whole reason I want to use someone's off the shelf solution is because I want to have all the new hard stuff like SVC and insertable streams both done for me and maintained by someone other than me ;P.


Like I said, Janus doesn't do anything by default except for some core RTC stuff and push packets around. You have to implement your own use-case via a plugin. It sounds like the Video Room plugin that's implemented in C is not exactly what you want. You can write your own plugin (maybe based on it) yourself.

If you want to implement E2E via insertable streams then you can start right here:

https://github.com/meetecho/janus-gateway/blob/master/plugin...


We write our own service to coordinate Janus rooms over multiple instances. That service takes care of translating from our own domain-specific entities to Janus rooms. It's also responsible for sharding rooms to healthy and available Janus instances. Instead of treating Janus rooms as thick/permanent, we prefer to treat them as "just-in-time": rooms get destroyed when not used and get provisioned when requested, making it very flexible.

We have made some modifications to the Janus video room plugin but only to fix bugs or provide specific functionality we needed for certain gaming use-cases. Out of the box, the plugin is already very featureful. The key is to always stay up to date with master, the project is fairly active and so keeping up to date is very important.

We have debated writing our own plugin and that's certainly a possibility in the future. (There is a Duktape JS layer available) To be completely transparent, if we were to reach that need, I would re-evaluate the solution with more recent alternatives, like pion.ly.


I recommend NOT implementing plugins in JavaScript with the Duktape JS layer. I strongly recommend writing plugins in Lua for maximal compatibility and performance.


Why's that? We wrote it to be functionally identical to the Lua plugin (the code base is the same), so engine and language apart they should behave pretty much the same way. Is there any known issue or limitation you're aware of?


What turn servers do you use?


Answering for myself, not for bbeausej. We are currently using coturn. It is quite easy to set up; you have to dig a little into the configuration parameters and that's it. I would recommend it.

Oh, and you probably should use the REST API for generating credentials on the fly, and think about scaling (depending on your needs).


What percentage of sessions do you see going through turn?


We use coturn as the TURN solution and talk to the Janus API via WebSockets (we found it much better than REST if the message rate is high)


We started our journey with Janus about three months ago and I can fully recommend it. It is an amazingly well-written piece of software, just as flexible and integrable as developers could wish. E.g. Slack used Janus, at least in 2016 [0]. It is important to understand: Janus offers the ingredients for building great WebRTC applications (examples [1]), whereas Jitsi is more of a ready-to-go solution and got much more attention than Janus did.

Lorenzo and his colleagues are doing a really great job.

In the SFU/MCU space, one really needs to decide beforehand what kind of solution is suitable for which requirement. I chose Janus because we could integrate it 100% into our software. For example, I was also looking into Jitsi, but compared to Janus it felt so much more complicated and not suited for that specific job.

However, it is important to point out that this is not a ready-to-go solution. There is a long list of things you will have to dig into:

- ICE (a way to stay connected if you switch between WiFi and LAN, or to punch a hole into your firewall) [2]

- Cross-browser compatibility (Thank you iOS [4])

- TURN/STUN (which matrix of UDP/TCP and ports is needed for hole punching?); I recommend coturn.

- Scalability: how many clients are planned? In my experience, CPU and bandwidth are the bottlenecks; we went with horizontal scaling

- How are you going to test your WebRTC application? Great results so far with https://testrtc.com, but you could probably also accomplish a lot with Selenium.

- Simulcast/bitrate and Unified Plan (use the available bandwidth and adapt on the fly) [3][5]

But once you've got it running, it is an amazing feeling. It's 2020, and it is possible for an SMB to offer video conferencing to customers via a web browser, using your own infrastructure, while being compliant with GDPR and other regulations.

[0] https://webrtchacks.com/slack-webrtc-slacking/

[1] https://janus.conf.meetecho.com/demos.html

[2] https://webrtcglossary.com/ice/

[3] https://webrtcbydralex.com/index.php/2018/03/14/extending-ja...

[4] https://webrtchacks.com/guide-to-safari-webrtc/

[5] https://www.callstats.io/blog/what-is-unified-plan-and-how-w...


I checked out the project on GitHub, and I was a little surprised to see no mention of jsmpeg.com, as over on that project the number one question and issue is the lack of a quality WebSocket server. Long story short: has anyone tried this, or does anyone think it would fit with jsmpeg?


Janus only supports WebRTC, and WebRTC doesn't support MPEG out of the box (to be more precise, it's not in the list of codecs any endpoint supports). Besides, streaming happens over WebRTC, not WebSockets: we only use WS as one of the alternative "transport" protocols for the Janus API, so just signalling.


We use Janus for a few hundred clients spread out over a few instances. It works great for us. We tried Kurento and it was just soooo hard and complex to set up. The Janus JS client library is pretty good and makes it really, really easy.


How hard is it to use Janus as a media server in the SFU paradigm? And where could I find a good example?

We currently use Kurento for its ease of use + the Java client, but we have been wondering about other solutions like Janus.


You might check out Mozilla's SFU plugin [1] for Janus that powers Mozilla Hubs [2]

[1] https://github.com/mozilla/janus-plugin-sfu

[2] https://hubs.mozilla.com/#/


Have you looked at mediasoup?


Why use C over Rust or Golang? It sounds like a security disaster waiting to happen?


It's not like Rust or Golang will magically save you either. I've been programming for a long time, C is my main language, so that's what I started using at the time and what I'm still using today. We worked a lot on performance and stability, so hopefully it won't suck ;-)


Yeah please, let the author know your preferred programming language so he'll rewrite the whole software.


I wasn’t aware that the software had a history behind it, my bad.


Does anyone have a link which explains WebRTC technology in a concise way?


I really want to write a book on this eventually, but I haven't found a good single resource myself :( I have done some talks trying to explain WebRTC, and I'm happy to answer any questions.

* Slides - https://pion.github.io/talks/2018-11-28-seattle-video-tech.h...

* Video - https://www.youtube.com/watch?v=FdgoOrJH8ok&feature=youtu.be...

* Video - https://www.youtube.com/watch?v=ezZYd5NsxE4

------

But when I teach others I try to break it down into a few unique chunks.

* ICE - Establishing P2P Communication

* DTLS/SRTP - Securing Communication

* SCTP - Sending binary data over UDP and handling loss

* RTP/RTCP - Sending media data over UDP and handling loss


WebRTC is good old RTP, on steroids.

It standardizes a set of conventions that web browsers can follow to a) solve the issue of NAT traversal (i.e. opening ports in consumer routers and firewalls), b) send video and audio (and possibly also arbitrary data) directly between web browsers, and c) adapt the video quality to the conditions of the network, in an autonomous and automatic way.

That's the bird's-eye view of it.

Some keywords to expand on this: "a)" is done with the ICE protocol, "b)" is done with plain old SRTP, and "c)" is done with algorithms called REMB or Transport-CC.



We changed the URL from https://github.com/meetecho/janus-gateway to the project homepage, which is generally preferred when a project is being discussed for the first time on HN. Especially if it also links to its GitHub page or other repository.

I found previous submissions, but no comments except https://news.ycombinator.com/item?id=22610510.


Well, at least the GitHub page stays up under the traffic.



