Monthly Archives: September 2018

From A Reader: AmRRON T-REX 18 AAR

The AmRRON program takes the best of Emcomm, adds the ideals of Freecom, and organizes it all into an effective tool.

Worth studying, and worth a good heart-search about what you and your station really offer for effectiveness when the chips are down.

73

Steve
K9ZW

brushbeater

AmRRON (American Redoubt Radio Operator’s Network) holds an annual grid-down exercise, having its members relay traffic and reports for a disaster scenario. In my view it’s what the ARRL’s Field Day actually should be, but that’s another story. Mauser sends the following:

First, I'm not sure how familiar you are with AMRRON, so a quick run down of their SOI is as follows: Nationwide voice and digital traffic nets on 30min windows, every 6 hours upon activation. This is followed by a rolling regional net every 6 hours consisting of 30 min voice followed by 30 min digital comms windows, which is supposed to be followed by 2m local voice and digital nets and dissemination of relevant traffic to the unlicensed public by the Channel 3 project (FRS/GMRS, CB, MURS) and the Black Echo project (low power FM broadcast). I really don't think there is anything else like AMRRON in…



Maestro Versions – where’s the power button?

I’m often asked what the external differences are between current Maestro production and the earlier series.

Making the switchover between versions less clear-cut are several field reports of Maestro units produced in the transition between what I’ve heard referenced as Maestro-A (the Early Production units) and Maestro-B (the current production units as of 2018).

The easiest external indicator is the location of the On/Off switch.

Maestro-A (Early Production) has the switch on the top, to the far left:

Early Production Maestro-A Top On/Off Switch Location

Maestro-B (Later Production) has the switch on the Left Side of the unit:

Later Production Maestro-B Left Side On/Off Switch Location

Internal differences haven’t been spelled out beyond the improved resolution of the Maestro-B screen and assurances that the Dell OEM module of the Maestro-A was replaced by a better new OEM component.  (The original Dell module had product nuances that Dell may have considered “features” in its own use as a tablet, but which were not the way FlexRadio users (or FlexRadio itself) expected the module to perform when used as an OEM component in a complex product.)

73

Steve
K9ZW

Thoughts on Remote Station dependencies in Emergency Use

Whether for Emcomm, Freecom, or just personal use, can a radio amateur depend on a remote station when the “chips are down”?

There are some compelling reasons that a remote station would be a useful tool in an emergency.  Whether it is to access a station unaffected by a localized emergency event, to gain a high-performance remote station’s capabilities while remaining relatively mobile, or to minimize personal risk from DF (direction finding) and retaliation in a confrontational emergency, there are dozens of rationales making access to a remote station worth considering.  Personally I find compelling the idea that a single operator might be able to access any one of several remote stations.

By definition remoting a station requires the operator to establish a link from their location to the remote station.  While there are several types of connections available the contemporary remote station depends largely on internet connectivity to create the “bridge” between the operator and the remote station.

I’d like to talk about this ‘internet bridge’ in general reliability terms.

Robustness, Reliability and Latency are the keywords to define what works best.  Most Robust, Most Reliable and Lowest Stable Latency are the goals we need for an effective remote operation.

All current solutions depend on the operator-to-remote-station ‘bridge’ traversing multiple internet connections.

Often the most Robust, Reliable and Lowest-Latency solution is technically complex and involved.

Solutions largely fall into a few classes:

  • Direct Login – where the operator logs in directly to the remote station.  These are fairly simple, but often have data-throughput issues and often require dedicated hardware at both ends.  The setup may be more technically challenging than anything in actual use.  (Geeky to configure, but easier to run later.)


  • VPN Tunnel – where an internet tunnel is created between the operator and the remote station.  These are more complex to set up, often requiring special software/hardware, but are largely workable.  A lot of folks find this solution more Geeky than they are ready to undertake.


  • Brokered Connections – basically allowing the ease of a Direct Login while brokering, behind the scenes, the advantages of a VPN Tunnel.  The operator (and the radios at the remote station) all connect to a server service that then pushes the operator and remote station off to their own VPN.  When they want to renegotiate a new connection, the server service is called back in to handle those negotiations.  Actual traffic doesn’t pass through the server service (that would be wasteful and add too much latency), but the service provides some level of overwatch.  FlexRadio’s SmartLink is the most widely known amateur radio Brokered Connection product.

Some solutions require the remote station to have a computer interfacing towards the wider internet, others allow station components to interface directly to the internet.

The remote station interface computer can range from a separate PC-class machine, to a dedicated processor integral to perhaps a router (thinking VPN here, folks), or even a board-type computer like a dedicated Raspberry Pi acting as the interface.

If you visualize this remote-station-to-operator ‘bridge’ from end to end, many components are single, dedicated, and unique to that ‘bridge.’  These are often called “single points of failure,” meaning that if one fails the entire system fails.  There is a lot of research on “single points of failure,” and I suggest searching on the topic (you might want to use the “SPOF” shorthand and “single point of failure mitigation” to get a start on analysis/solutions).

There is another consideration concerning the Robustness of parts of the ‘bridge.’  While we try to build our remote station ‘bridge’ using the most robust components, we normally frame the expected reliability under the concept that the whole system is only as good as its weakest link.

Actually the whole system (that ‘bridge’) isn’t even as good as the weakest link.  Reliability engineers multiply each SPOF’s uptime percentage together to get an overall system reliability prediction (see Lusser’s Law).  This means we shouldn’t consider a ‘bridge’ that crosses, say, seven different 95%-reliable SPOFs to be 95% reliable; rather we should consider that ‘bridge’ only roughly 70% reliable (0.95 to the seventh power, rounded off).
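As a quick sanity check of that series math, here is a minimal sketch in Python (the seven links and the 95% figure are the article’s illustrative numbers, not measured values):

```python
# Series (chained) reliability per Lusser's Law: the overall system
# reliability is the product of each single-point-of-failure's reliability.
def series_reliability(reliabilities):
    overall = 1.0
    for r in reliabilities:
        overall *= r
    return overall

# Seven SPOFs along the 'bridge', each assumed 95% reliable:
bridge = series_reliability([0.95] * 7)
print(f"{bridge:.1%}")  # prints "69.8%" -- roughly 70%, not 95%
```

The point the code makes is that chaining even quite-reliable links erodes the total surprisingly fast.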

Admittedly this sort of math is fairly tedious and, for our purposes, may only offer a rough reliability indicator in the end, as we seldom have actual measured reliability figures for the individual components.  One certainly wouldn’t want to build a reliability prediction on marketing claims – that is why we intuitively put more stock in whatever real-world experience we can gather information on.

A lot of Emcomm/Freecom stations address the known SPOFs that face an emergency station: redundant gear, radios, power, manual paperwork/procedures to replace the automated ones, repair supplies & tools, and maybe even a cached complete redundant station in a different location in case the main station is damaged.

Things quickly get complicated when we remote, though.  It is a lot harder to, say, swap in a good antenna switch after lightning damaged our usual one when we are operating remotely.  It just isn’t likely to happen without feet on the ground at the station itself.

Can we then depend on traversing the Internet to complete our ‘bridge’?

The impetus to write this article was a rare outage in Microsoft’s Azure, the backend product behind FlexRadio’s SmartLink.  SmartLink became unusable for part of a day when lightning created a power surge that damaged the cooling in a major Microsoft Azure datacenter.  The loss of cooling led the servers to protect themselves by going offline.  While established SmartLink ‘bridges’ appeared unaffected, SmartLink lost the ability to broker new connections.  Establishing new remote connections via SmartLink wasn’t possible.

That brief outage led to a lot of thinking about whether a remote station is a good Emcomm/Freecom solution.

In my case I do keep a SoftEther VPN backup at the ready.

That is a Parallel alternative to the brokered SmartLink connection.

Parallel systems improve overall reliability with every completely separate parallel system available.

Mathematically, say we have three 95%-reliable options.  We can calculate the overall reliability using the formula

Overall Reliability = 1 − (first system’s failure rate × next system’s failure rate × …)

In our example that gives

Overall Reliability = 1 − (5% × 5% × 5%) = 1 − 0.0125%, which equals 99.9875% calculated Overall Reliability
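The parallel calculation can be sketched the same way (again using the article’s illustrative 95% figure for each option):

```python
# Parallel reliability: the system fails only if every independent
# parallel path fails at the same time, so multiply the failure rates.
def parallel_reliability(reliabilities):
    failure = 1.0
    for r in reliabilities:
        failure *= (1.0 - r)
    return 1.0 - failure

# Three independent 95%-reliable options:
overall = parallel_reliability([0.95, 0.95, 0.95])
print(f"{overall:.4%}")  # prints "99.9875%"
```

Note the independence assumption: if two “parallel” paths share a component (say, the same home internet uplink), they are not truly parallel and this math overstates the benefit.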

(If you think you’d like to get into more on this subject, including guidelines on how to calculate combined series/parallel system reliabilities I can suggest http://reliawiki.org/index.php/RBDs_and_Analytical_System_Reliability for a starting point.)
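Putting the series and parallel pieces together, here is a compact combined sketch (all figures illustrative): each ‘bridge’ is a series chain of SPOFs, and fully independent redundant bridges sit in parallel.

```python
# Combined series/parallel: each 'bridge' is seven 95%-reliable links in
# series, and two fully independent bridges run in parallel.
per_bridge = 0.95 ** 7                    # series: product of link reliabilities
overall = 1 - (1 - per_bridge) ** 2       # parallel: 1 minus product of failures
print(f"{per_bridge:.1%} per bridge, {overall:.1%} with a parallel backup")
# prints "69.8% per bridge, 90.9% with a parallel backup"
```

Even one truly independent backup bridge lifts a ~70% system above 90% – which is exactly why the parallel SoftEther VPN path matters.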

The math should guide us – if we have truly parallel redundancy we minimize the SPOFs we can control.

The remaining wildcard is how reliable we can consider the Internet to be in an Emcomm/Freecom situation.

Whether the internet is interrupted by the emergency event itself or is disrupted separately, can we depend on it to allow our proposed Emcomm/Freecom remote operations?

Recently in the amateur radio news, the MARS folks announced they want their members to have the capability to both operate and drill without internet connectivity.  As a great many MARS stations use a computer, they have asked that this computer be ‘air gapped’ – meaning physically disconnected from the internet.

I’m thinking it would be best practice for any Emcomm/Freecom remote station to also have a parallel operator-to-remote-station ‘bridge’ that is fully ‘air gapped.’

Otherwise my take is that we are just fooling ourselves, as the greatest part of the ‘bridge’ crosses systems & hardware we neither control nor can access.  In most cases we may not even be able to figure out exactly what the ‘bridge’ topology actually is.

If that ‘bridge’ topology is altered to bypass damaged components (or for other reasons), it may pick up unacceptable latency, compromising our ability to operate remotely.

In a future post I’ll cover ideas on possibilities for an “air gapped bridge.”

73

Steve
K9ZW
