Recently I’ve been getting interested in WebRTC, WebSocket and Server-Sent Events (SSE). They are all VERY interesting technologies, but they focus on different concepts and are meant for different purposes.
The main WebRTC mission is to establish connections directly between clients (browsers). It’s basically an unreliable channel (by default it runs over UDP instead of TCP), and a good usage example is decentralized communication between browsers (games and so on).
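Just to give an idea, here’s a minimal browser-side sketch (in TypeScript) of opening an unreliable data channel; the signaling part (exchanging offer/answer and ICE candidates through some server) is left out, and the channel name and STUN server are just examples:

const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
});

// ordered: false + maxRetransmits: 0 gives UDP-like semantics:
// messages may arrive out of order or not arrive at all
const channel = pc.createDataChannel("game-state", {
  ordered: false,
  maxRetransmits: 0,
});

channel.onopen = () => channel.send(JSON.stringify({ x: 10, y: 20 }));
channel.onmessage = (event) => console.log("peer says:", event.data);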
WebSocket is more “common” and is meant to establish a bi-directional connection between server and browser without any user action. It’s ideal if you have to stream some data to the client (back and forth) and you don’t want to resort to tricks like AJAX polling. It’s basically reliable because it runs over a TCP connection.
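A minimal browser-side sketch (TypeScript) of what this looks like — the endpoint and the message shape are invented for the example:

const socket = new WebSocket("wss://example.com/stream");

socket.onopen = () => {
  // bi-directional: the client can push data whenever it wants...
  socket.send(JSON.stringify({ type: "subscribe", topic: "prices" }));
};

// ...and the server can push data back without being asked again
socket.onmessage = (event) => console.log("server pushed:", event.data);

socket.onclose = () => console.log("connection closed");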
Server-Sent Events are a quite simple approach to push data from server to browser without any client action. The main difference between SSE and WebSocket is the direction of communication: with WebSocket we have fully bi-directional communication, while with SSE it’s mono-directional, from server to client.
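The client side is even simpler, something like this (the /events endpoint and the “update” event name are just examples):

// the server pushes, the client only listens
const source = new EventSource("/events");

source.onmessage = (event) => console.log("server sent:", event.data);

// named events are possible too
source.addEventListener("update", (event) => {
  console.log("update:", (event as MessageEvent).data);
});

// if the connection drops, EventSource reconnects by itself
source.onerror = () => console.log("connection lost, retrying...");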
Which is the best technology? As always, it depends on what you need.
If you need a reliable connection you should avoid WebRTC (UDP can lose packets... by design).
If you are unsure whether to choose WebSocket or SSE, the deciding factor should be whether you need bi-directional communication or not.
Support in browsers
WebRTC support is coming: Firefox and Chrome support it, but (as usual) IE is behind (http://caniuse.com/#feat=rtcpeerconnection)
WebSocket support is pretty good in all major browsers (Firefox, Chrome and even IE) (http://caniuse.com/#feat=websockets)
Server-Sent Events support is still coming: they are supported by Firefox and Chrome, but (again) IE is behind (http://www.w3schools.com/html/html5_serversentevents.asp)
Everyone knows the revolution that AJAX brought to the web; now we are pushing the limit further with these technologies, which will make the web an even more useful and feature-rich place!
I’m often SSHed into some servers (obviously without a graphical interface) and sometimes I need to understand where disk space is being used (to plan provisioning or simply to check that everything is going well).
Usually I rely on monit to alert me if something strange is happening, for example I receive an email when used disk space goes over a threshold (let’s say 70% of total disk space), but I never found a good and comfortable way to see where that disk space was being used.
You can use du, but running it on the whole filesystem is not really comfortable because you have to “clean” the output with grep to get decent results.
Looking for a solution to this problem some years ago I found ncdu and I fell in love with it.
Honestly I don’t use it very often because my servers are rarely disk-intensive, but when I need to “inspect” a disk it’s the tool I always reach for.
It was written by Yoran Heling as a fun project in C; it uses ncurses (which makes it very comfortable to use over SSH) and it’s quite fast (scanning a few dozen GBs takes just a few seconds).
Setup is straightforward: you just have to download the .tar.gz archive, uncompress it and run
./configure && make && sudo make install
Usage is also very simple, you just have to execute
ncdu [dir which you want to inspect]
After analyzing the filesystem, ncdu lets you browse it with the keyboard, giving you disk usage information for every directory. It’s very nice!
If you are looking for a way to have “the big picture” of your disk and you don’t have a graphical interface, ncdu is definitely THE answer!
Download it from here
These days I’m trying to find the best way to get data out of systems and applications.
For systems: I don’t log very much (it’s bad, I know). I usually set up a monitoring system (monit) which periodically checks all resources (CPU, RAM, disk, etc.) but I don’t keep a history of these values. Something is available in Linode’s Longview dashboard but honestly I never look at it.
For applications: I usually prefer an explicit approach: in my application’s code I create events (stored somewhere like mongodb or a logfile) which are used to generate interesting (mostly statistical) information.
This is a good approach in some situations but it doesn’t work every time: imagine you are not using mongodb in your app, do you really want to add it just to track events? Is it worth it?
Almost all applications write logs, so it makes more sense to write my events to the application’s logfile and parse it to get the aggregated information I need.
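For example (just a sketch, in TypeScript/Node here, with an invented file name and fields): appending one JSON object per line keeps the events easy to parse later by logstash or any other tool:

import { appendFileSync } from "fs";

// append one JSON object per line to the application's logfile
function trackEvent(name: string, payload: Record<string, unknown>): void {
  const entry = { event: name, timestamp: new Date().toISOString(), ...payload };
  appendFileSync("log/events.log", JSON.stringify(entry) + "\n");
}

// e.g. record a domain event next to the normal application logs
trackEvent("user_signup", { plan: "free", source: "landing_page" });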
The information stored in a Rails log file is often ignored: many people use tools like New Relic to get information about their application’s performance without realizing that all that information is already sitting in the logfile, you “only” need to use it.
So, to fix this bad habit, I’m trying to store this information (and even more) in elasticsearch using logstash, and to interact with the data using kibana. This setup requires a bit of time to put together but it should give much more value to your logs.
How it works is quite simple: your application writes logs, logstash reads them, parses them and stores the results in elasticsearch. When you want to see the data you use kibana, which fetches it from elasticsearch and gives you a nice output.
I’m still trying to find a setup I like for everything, and (more importantly) I’m still trying to understand how many resources logstash + elasticsearch + kibana take.
I will let you know my impressions when I have them; in the meantime: take a look at your logs and find the best way to make them useful!