Command Timed Out

Cannot arm/disarm remotely via iPad app, iPhone app.

Update: also tried Alarm.com website. Same result.

“Panel Command Timed Out” repeatedly.

It looks like the commands this refers to were all eventually acknowledged, with a delay of about 20 minutes in this case. Have you noticed this previously, or is this a one-off occurrence?

Signal strength on the account and registration time both look fine, so there is no general signaling issue, which typically means this was a carrier delay. Unless it happens frequently, this type of error resolves itself. I'm getting pretty speedy responses from the status commands being sent out right now.

Just to be safe I’ve sent over a reboot command so the module gets a fresh connection. Let us know if you see this occurring again.

It has happened before; I reported it at least once previously.

Beyond the concern about any particular incident is a concern about how frequently this is happening without my noticing, as I'm not arming/disarming all day long.

How can we audit the reliability of the service?

Inbound command delays can occur; they are rare and usually either geographic, tied to a particular tower, or caused by a local system issue.

One common local cause of inbound delay is a cellular network extender. If you use a network extender for the same carrier as your module, it will detrimentally affect inbound commands. In the case of Verizon, the module's "phone number" can typically be added to the extender's whitelist, which will resolve the issue. If you or a close neighbor use a network extender, this may be the problem.

Another issue which can occur with 2GIG panels relates to Z-Wave commands getting backed up. Firmware 1.17 adds a Z-Wave command queue, which helps prevent an overload of Z-Wave commands from adversely affecting the module.

Another common physical cause of signal trouble is high-voltage interference, for example if the module antenna is run in the wall resting near a house voltage line. Perhaps the most common issue comes from an older 2G antenna being reused with CDMA and HSPA modules, which should not be done. If you are using the smaller, roughly 1/2 inch x 2 inch antenna that was used with a 2G module, you'll typically notice a lot of issues.

Now, if no local issue fits, the most likely cause is going to be the carrier's tower. We can see when commands are sent and when they receive an acknowledgement from the panel. Outbound signals are very rarely affected unless signal strength is extremely low. Inbound commands need to be routed to the module, and if your panel is switching tower connections often, this will delay inbound commands. Outbound commands always have a route; for inbound commands, the route changes. (Similar in nature to the network extender problem.)

Can you provide us with a rough frequency of when you notice this issue? I do not see other reports on this account or on the forum. There is no way to audit inbound command delays unless inbound commands are being sent and delayed, but in general I'm not seeing any issue with modem pings in the history.

If you do not believe any local issue fits, there are a few options.

We can request that ADC push this to the carrier for review. Often, with a few reports in a geographic area, the carrier is able to determine and resolve the issue.

Another option, if all things are equal with the coverage check tool, is to try a different carrier.

The best option, which eliminates natural cellular latency as well as any possible carrier related inbound delay, is to use the Go!Bridge, which gives you concurrent IP communication backup. Commands are sent via both methods, and inbound signals are usually near instant.

As a follow-up, we discussed this case with Alarm.com and they dug into the history. Aside from the signals referred to here, essentially every other signal we could find within the past month shows no delay, so it does look like this was an errant inbound delay. That said, please do let us know if it continues. We do not see any delay now; the system is responding to inbound commands very quickly.

Really appreciate the detail; I think we can readily and safely rule out any persistent local issues.

The question remains: how can we audit the reliability of the service? That would be useful.

Thanks,
.//A.

Can you provide us with a rough frequency of when you notice this issue? I do not see other reports on this account or on the forum. There is no way to audit inbound command delays unless inbound commands are being sent and delayed, but in general I'm not seeing any issue with modem pings in the history.

See above. Perhaps to clarify the question: are you looking for a way to test yourself? Sending commands to test is the best and really only way. If you can recreate the issue, and you notice a delay in processing a command, try running a cell phone test at the panel. Does this immediately resolve it and cause any hung commands to process?

Most rules are saved at the panel and processed locally for automation, with very few “scheduled” commands coming from a remote source, aside from Arming Schedules.

Thx for the note, Warren. I affirmatively reported this once before in Dec of 2015, and have noted it several times since. It never lasts long, but it happens nonetheless. And yesterday, mid-morning, a disarm command came through unbidden (it must have been from the previous evening).

My question is not about a particular incident or the tactical steps to try and recreate/solve it. It is about monitoring/auditing a service that is paid for on a subscription basis and meant to have high availability. I can do this with websites, payment systems, cable, home automation, routers, etc.

Is there no way to actually audit the reliability and uptime of this alarm provider? I ask not as a criticism but as a user. The odds of me having this experience during the extremely rare times they are having issues seem astronomical. So my somewhat educated bet is that these delays in communication must happen fairly regularly and I (& others) just don't see it, as we only interact remotely with the panel irregularly (at best). OR, I am just remarkably lucky… ???

Hopefully that helps in providing context and clarity around what I am asking.

.//A.

Is there no way to actually audit the reliability and uptime of this alarm provider?

To clarify: yes, auditing the reliability and uptime is easily accomplished through Alarm.com on the dealer end. There are numerous tools to assist with this. The pings we described are one automatic part, giving us (and you, through loss-of-communication notifications) solid knowledge of when communication is disrupted.

Uptime is logged on all accounts. Panel communication "uptime" on this account, for example, is 100%. This is across a 30-day time frame and indicates the panel was never seen to lose communication.

For finite local measurement, you can adjust Q23 to an extremely low value so the panel reports whenever it cannot communicate with the cell network. If you are concerned about intermittent loss, this is the best tool to log all periods of comm loss at the panel.

However, the point in this case is that inbound delays are typically outside that scope and represent a different issue. A few different problems can present the same symptom: a delay on inbound commands, but no communication loss and no effect on outbound signals. We can clearly see the symptom if it exists, but to determine the cause it needs to be tested. Checking uptime in this case doesn't tell us anything concrete. Checking the percentage of commands delayed doesn't tell much either, although if it occurs often it is easier to escalate.

To give a general idea from my experience working on these systems, more than half of the delays we've seen have been local problems. We've seen Verizon exhibit the delay issue much more often than AT&T, so with a Verizon module it is plausible that this is a carrier concern.

Alarm.com collects communication statistics and works with carriers on a constant basis, but in cases where one account shows delays and no nearby accounts do, further testing is required before it can be pushed to the carrier for resolution.

So, all that said, when you do notice this issue and a command does not go through within 30-60 seconds, try a cell phone test at the panel. Does this cell phone test run successfully? And does it immediately allow all commands to process?

Ok, thanks. Is there a report that can show me all of the inbound commands to my panel and response times over X period of time?

Again, I am not trying to tactically diagnose an issue at this point; I am trying to audit the reliability of the remote management of the service I have subscribed to, with which I have experienced inexplicable and seemingly intermittent delays. Maybe there is an issue, maybe there isn't, maybe I have bad luck. It starts with auditing.

Thx,
.//A.

Again, I am not trying to tactically diagnose an issue at this point; I am trying to audit the reliability of the remote management of the service I have subscribed to, with which I have experienced inexplicable and seemingly intermittent delays.

Certainly. There is no way for a user to pull a report like you are suggesting directly, but your dealer (suretyDIY) can request custom reports or utilize filtering for the data you are asking about.

For reference, on this account from 1/1/2017 to 3/1/2017 there were 252 commands requiring acknowledgement, automated and manual, including weather updates, arm, disarm, backup syncs, etc. Of those, there are 3 instances of unnatural delay (anything over about 60 seconds; Alarm.com maintains 120 seconds as the expected maximum, but that is more applicable to older 2G modules). One was in response to an automated process. The other two instances span 11 arm commands: 7 back to back on 2/27 and 4 back to back on 2/8. In each instance, all arm commands received panel acknowledgement simultaneously, some time after the first command was sent (we would expect to see this if the issue is inbound routing).

This provides a fairly good representation of total effect, but there is a margin of error when trying to determine frequency. Some days in history show 10-20 commands requiring acknowledgement. Some days show 1 or 2.
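If it helps to picture what that filtering looks like, here is a minimal sketch of the kind of check involved. The export format and column names are hypothetical (the actual Alarm.com data isn't exposed to users as a simple CSV), but the logic is the same: flag any command whose panel acknowledgement lagged the request by more than about 60 seconds.

```python
import csv
from datetime import datetime

# Anything over ~60 seconds is treated as an unnatural delay
# (Alarm.com's stated expected maximum is 120 seconds).
DELAY_THRESHOLD_S = 60

def flag_delayed_commands(path):
    """Scan a hypothetical command-history export and return delayed commands.

    Assumed columns: command, sent_at, acked_at (ISO 8601 timestamps).
    """
    total, delayed = 0, []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            sent = datetime.fromisoformat(row["sent_at"])
            acked = datetime.fromisoformat(row["acked_at"])
            lag = (acked - sent).total_seconds()
            if lag > DELAY_THRESHOLD_S:
                delayed.append((row["command"], lag))
    return total, delayed

if __name__ == "__main__":
    total, delayed = flag_delayed_commands("command_history.csv")
    print(f"{len(delayed)} of {total} commands exceeded {DELAY_THRESHOLD_S}s")
    for command, lag in delayed:
        print(f"  {command}: acknowledged after {lag:.0f}s")
```

Run against the two-month window above, a check like this is what surfaces the 3 delayed instances out of 252 commands.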

Note that only a few months of "command" history is kept searchable, due to the overall size of the data. This is why we ask for a description of perceived frequency, as this account has been active since long before that cutoff.

We've enabled advanced supervision on the back end, which more aggressively pings the system to detect intermittent loss of communication, but I do not anticipate that this will show anything given the discussed symptom and likely causes.

From above, going forward, there are a few directions to go to directly address the issue:

  1. We’ve escalated this case with the Alarm.com signaling team to get the cellular carrier involved.

  2. Switching modules to the alternate carrier, if the coverage check tool shows equal coverage, would resolve any carrier specific issue.

  3. Using a Go!Bridge would resolve/bypass any carrier or local physical issue with inbound cell commands by sending them via broadband simultaneously.

BTW, just to check off another possibility, where is the antenna located? Is it run in the wall behind where the panel is mounted?

If it is instead curled up in the panel housing that can lead to intermittent issues. The antenna itself should be clear of the panel internals.

Another instance of long delays/no confirmation in communication from the central station to my panel. This time, I repeatedly tried to arm it remotely from 8:52-9:49am on Friday, April 28th. Pls advise.

in-wall extended antenna.

in-wall extended antenna.

Is that the 10’ ANT2X model?

Where is that antenna located in the building? Is it in an interior wall? Is the building brick or stone?

What I am seeing, which may give a possible indication of why you are seeing intermittent issues, is a borderline signal strength that dipped around the time of this report. In the closest report to the time you mention, the panel showed a signal strength of 7. That is low enough to cause concern, but it jumped back to low double digits in the next report.

The reason I ask about the location and the construction around the antenna is that other users in the nearby area on the same network show extremely high signal strength. I would expect the difference seen here to be the result of a local interruption of some sort.

I see a few other instances where it dropped a point or two, but that is to be expected.

If prior troubleshooting suggestions or local observations do not show any likely physical cause, I’d recommend the same as above:

One option would be to try the other 3G carrier. (You could also try LTE if you update to firmware 1.17.)

The best option, which eliminates natural cellular latency, borderline signal strength concerns, as well as any possible carrier related inbound delay, is to use the Go!Bridge, which gives you concurrent IP communication backup. Commands are sent via both methods, and inbound signals are usually near instant.