Flock’s new CISO posted another blog post — his second, I believe.[1] The first was regarding the Bishop Fox audit, which was discussed here. His latest post is “Flock Safety Cybersecurity: How We Protect Customer & Community Data.” In it, he presents a cavalcade of falsehoods and omissions that could not have been better hallucinated if ChatGPT had written it, with some casual defamation tossed in for good measure.
The Timeline, According to Flock
The timeline begins with an “External Claim” in March 2025, where “an individual contacted Flock with security findings after acquiring a device through illegal, unauthorized means.” Presumably, this refers to Jon Gaines’ research. A year later, Flock has not fixed those issues.
What it has done is reflected in the rest of the timeline: it “disclosed and addressed low-severity vulnerabilities,” it “responded to” the research, and it “published a response debunking false claims that the company had been hacked.”[2] None of that fixes the issues that were disclosed to Flock in March.
The first of those actions, disclosure, happened in November, after Gaines published his report. Before November, Flock had not disclosed the issue. Not even to its customers. This is despite the CJIS Security Policy, which requires vendors to notify the contracting government agency and the FBI.
The Iowa Department of Public Safety (a Flock customer and the CSA for Iowa) confirmed it had received no notification from Flock. Other CSAs — the Florida Department of Law Enforcement and the Illinois State Police — either did not respond to a Sunshine Act request or asserted that vulnerability notifications are “ALPR data.”
Despite a contractual and legal obligation to provide this notification to its customers, Flock did not do so for eight months, and then only after its customers found out.
The timeline never mentions a first YouTube video, yet pivots straight to a “second” one with “misleading claims about Flock PTZ cameras.” Flock’s timeline says it “addressed” those claims.
“Misleading Claims,” According to Flock
The heading “Readdressing Misleading Claims About Cybersecurity at Flock” is a lie. Not because the content below it is false — although it’s not exactly true, either — but because the section does nothing resembling addressing claims, as the heading promises.
Can’t even trust a heading. Anyway …
I want to be crystal clear: vulnerabilities are a part of the development process of hardware and software. No company on the planet is infallible, nor is any company unhackable. It is an expected and normal process for vulnerabilities to be discovered and remediated at each stage of software development. From the point of a developer writing code all the way to that finished product running in production.
We engineer bridges and buildings so that they don’t collapse. We do all sorts of math, engineering, and science to make sure this doesn’t happen. But occasionally, and unfortunately, they collapse anyway. When they do, we don’t shrug our shoulders and say “it’s part of the process.” We investigate the cause and address it. We make meaningful, articulable improvements to our engineering processes and standards.
We have codified those collapses and their fixes into laws and regulations; we explain them in engineering textbooks and use them as examples in slide decks at industry conferences and seminars. That is why buildings and bridges today are more earthquake-resistant than those of 100 years ago.
We don’t hide the problem. We don’t say it’s “an expected and normal process” for a bridge to collapse.
It’s an exceptional situation for a bridge to collapse, just as it’s an exceptional situation for a software vulnerability to be discovered in production. And just as people have died from buildings crumbling in earthquakes, people have died from insecure surveillance networks.
You don’t hide engineering issues — civil or software. You make them public, you address them, and you learn from them.
But, despite claiming that discovering these issues in production is “expected and normal,” Flock’s bulleted list of what to expect from a vendor does not include it.
The list does not mention notification or remediation for production issues. No timelines, no issue categories, no mention of public vulnerability trackers, nothing. Not even a “We will notify our customers and provide a remediation plan within 48 hours” or anything similar.
Flock’s Cybersecurity Team
Flock continues to invest in our team and has 10 new headcount positions slated for hiring this year, adding to our existing team of 20+ engineers. Cybersecurity is nothing without people.
This is the exact opposite of what Flock should be doing. Instead of hiring more engineers to develop more buggy AI-powered features and release more half-finished websites, Flock should be investing in hiring policy and security experts.
The post then lays out some team names without defining their headcounts, budgets, or positions in the organization hierarchy. In some companies, a 50-person “DevSecOps” team is focused on security and can shut down production when needed; in others, it’s literally one guy in Mexico City writing scripts so developers can automatically release code without review.
Castaldo does not even hint at where Flock might fall on that spectrum, and that’s cause for concern.
The “First” video
Although Castaldo omits the November video — which was Benn Jordan working with Jon Gaines — from the timeline, he devotes a section of the post to it.
In November 2025, a YouTuber released a YouTube video with two other individuals claiming to have “hacked 80,000 Flock cameras”. That statement tells you all you need to know about the credibility of the individuals and the video itself.
The video is titled “We Hacked Flock Safety Cameras in under 30 Seconds.” The closest thing to Castaldo’s quote is: “Upon further investigation, it turns out that there are over 80,000 of them. And um we got some and we hacked them.” Which is 100% true.
Blatantly misquoting an opponent’s statement before attacking it tells you all you need to know about the credibility of that individual.
Castaldo uses some choice words like “illicitly” and “illegally” to characterize the acquisition of the Flock hardware. There is nothing “illegal” about buying hardware, and absolutely nothing suggests that Gaines (or whoever bought the hardware) did so illegally. Falsely accusing someone of criminal conduct is defamation per se in most jurisdictions.
Flock did not disclose these issues to customers. Flock did not notify customers in accordance with industry best practices and according to CJIS standards. Flock did not close out any CVEs, nor did it open any new ones. Flock did not tell Jon Gaines “we are aware of this and we will fix it.” And at no time in 2025 (or at all, for that matter) did Flock communicate a fix.
While the findings were legitimate, they were all of low severity. Meaning the risk to customers or customer data was near zero.
Many of the findings in the report are high severity under the framework laid out by the U.S. Department of Justice, which governs much of the data at issue. Castaldo does not specify what framework he uses for his “low severity” classification or his “near zero” risk assessment.
Dunwoody gave us vibes-based auditing and compliance; Castaldo adds another layer: vibes-based cybersecurity.
Had this individual not prevented [the camera] from connecting to our cloud, most of their findings would have been moot.
This is a fair enough statement in isolation, but it does not address two key problems.
First, there is no evidence that Flock discovered and fixed these issues, and rolled out an update. No required customer notifications, no proactive security disclosures, nothing. Complete silence.
If these issues were indeed fixed, and were not the result of plain negligence, nothing is lost by publishing them. Most software vendors do exactly that to build trust. Microsoft, for example, has a page called “Vulnerabilities and Exploits” on its main website, and it includes a list of fixes with each update, including any security fixes.
If Flock had published anything or notified anyone, cross-referencing those notifications against Jon Gaines’ report would make for an easy exercise in ticking off fixed issues and seeing what — if anything — remains.
Flock could easily restore trust and show that it is on top of its security by publishing a few emails that it already sent to its customers when it first discovered these issues — as it is required to do — or when it fixed the issues — as is standard practice.
Second, there have been no patches for this particular operating system since 2021. While security fixes could have been deployed for Flock’s custom software, no vendor OS fixes were released.
Connecting it to the network would not have caused non-existent patches to be applied.
The “Second” video
This individual did not ethically submit any information to Flock prior to the release of their video
If I’m recalling the video correctly, it is true that Jordan did not submit information to Flock before releasing the video. But the last time issues were disclosed to Flock — in March, according to the timeline — they were not fixed or disclosed even months later (or, to this day, as far as I’m aware). Disclosure to a vendor is often the right choice, but there are no bright lines in ethics.
In this case, anyone whose ethics dictate minimization of harm would have done exactly what Jordan did. He denied Flock a second opportunity to jeopardize people’s safety by trying to bury an issue, as they did when issues were disclosed to them in March.
Just Keep Digging
Flock worked with our carrier partner to quickly resolve the network configuration issue. … Flock has also modified the diagnostic interface to require our technicians to log in with a username and password. Again, this interface is intended to be usable when a technician is physically present.
First, let’s address the fact that the software had to be “modified” to require a username and password at all.
According to Castaldo’s post, Flock did all of these things:
- “Threat modeling during the design phase of a product”
- “Scanning and fixing code as the developer is writing it”
- “Scanning and fixing finished code when a developer submits it to the code repository”
- “Scanning and fixing applications running in production”
- “Continuously scanning and monitoring the infrastructure the application is running in”
- “Conducting penetration tests against all of the above.”
To top it off, he writes immediately below that list: “There is a cliche about cybersecurity being an onion with many layers, and that remains accurate today.”
Yet, in that whole development process, nobody at Flock, at any time, said: “hey, maybe we should require a username and password.” Even hardcoding “DonkeyKeepOut!” as a password would have prevented Jordan from gaining access.
The second issue is that no matter what layers Flock might have in its development process, there was only one in its security: Verizon’s configuration. In this, Flock’s security model is less like an onion and more like a banana: a single layer that can easily be peeled away by anyone who wants access.
Flock gave Verizon the unchecked, unreviewed, unsupervised ability to create and manage the security configuration for an interface that was not secured with a password.
Even without a “misconfiguration,” Verizon employees would have had access. A company with roughly as many employees as Burbank, CA has residents (plus who knows how many contractors) having unfettered access to live videos of kids playing in parks is Castaldo’s baseline definition of secure.
On Android
The software on Flock’s cameras hasn’t received vendor security updates since 2021. That is the central fact of this section of Castaldo’s post, and the one he does not address. Instead, he offers several paragraphs of technically misleading context about chip architectures — context that, on examination, actually makes his position worse.
Flock hardware runs on a heavily modified version of the Android operating system maintained by Google. This is an open-source operating system, meaning anyone in the world can look at the code and use it.
Flock has “heavily modified” Android, but never published those modifications. Yet we should feel assured — presumably based on vibes — that its “heavy modifications” are not material enough to affect security.
This is very different from the CPU in a computer running Windows or MacOS. Qualcomm’s chipsets are purpose-built and support specific operating system versions.
This is somewhat backwards, because hardware vendors don’t tend to build chips to accommodate operating systems, but it’s accurate enough in the way it matters: there is a fixed relationship between the hardware and the OS.
Flock Falcons reportedly use Qualcomm Snapdragon 625 chips, early 64-bit ARM chips (the same instruction-set family as the M1/M2 chips in current Macs). These were supported through Android 8.0/8.1, whose support ended in 2021. This is comparable to support for older Intel-based MacBooks, which is also ending. There is nothing particularly unique or different about Qualcomm chips in that regard.
It’s theoretically possible that for the past five years, Flock has been paying engineers to backport security fixes to this unsupported version of Android. There are projects like LineageOS that do exactly this to support aging phones in primarily low-income countries.
It’s also theoretically possible that Flock designed the Falcon in 2017 around the then-popular Snapdragon 625, and that it did not replace all of its devices in 2021 when support ended, but instead designed an entirely new line of devices (which it called “Flock LPR”) with the goal of replacing the Snapdragon 625-based Falcons as they age out of service.
Qualcomm produces a custom, heavily modified version of Google Android that is designed to run on their chipsets.
Qualcomm does produce a modified Android that is optimized for its hardware; this much is true. The problem is that Qualcomm starts from an official Google Android version and modifies it for its hardware.
Qualcomm released its last full board support package (BSP) for the Snapdragon 625 in 2019, and its last security update in Q4 2020.
Android Things
Gaines’ security report finds a problem in “Android Things 8.1” being EOL. Android Things was a popular OS for the Snapdragon 625. In the blog post, Castaldo emphatically bolds that “Flock has never used Android Things, in any product.”
Never mind that it contradicts the earlier claim that “all of the findings were previously discovered by Flock’s cybersecurity team,” or that this is the first time Flock has raised the point: the distinction between “Android Things 8.1” and “Android 8.1” is irrelevant.
Because “Qualcomm’s chipsets are purpose-built and support specific operating system versions,”[3] none of those “specific operating system versions” have been supported since 2021. Not Android 8.1, not Qualcomm’s BSPs, not Android Things 8.1.
Even if the statement were true — which I doubt, because I trust Gaines and Jordan to be able to identify an OS — it would be a nice “gotcha” on an entirely meaningless fact.
At the end of the day, the software hasn’t received security updates since 2021. That’s the point that matters, and the one Castaldo does not address.
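That fact is, at least, easy to check: every Android build exposes its last security patch date through the "ro.build.version.security_patch" system property, readable with "adb shell getprop" on any accessible device. As a minimal sketch of the staleness arithmetic (the patch-level string below is hypothetical, chosen only to illustrate a device last patched in late 2021):

```python
from datetime import date

def patch_age_years(security_patch: str, today: date) -> float:
    """Years elapsed since an Android ro.build.version.security_patch
    value, which is formatted YYYY-MM-DD."""
    y, m, d = map(int, security_patch.split("-"))
    return (today - date(y, m, d)).days / 365.25

# Hypothetical example: a device whose patch level reads "2021-10-05",
# checked in March 2026, is more than four years behind.
print(round(patch_age_years("2021-10-05", date(2026, 3, 1)), 1))  # → 4.4
```

Any fleet audit could run this check against deployed cameras in minutes, which is exactly why the silence on the actual patch level is conspicuous.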
Backporting
We will continue to backport any necessary security patches, as required under our agreements with all customers.
If Flock is indeed backporting security patches to Android (Things) 8.0 or 8.1, or whatever the case may be, then security itself may not be the issue. However, “as required under our agreements with all customers” includes the requirement to notify customers when they do discover security vulnerabilities.
Each time Flock backports a fix, its contracts — at least those with CJIS security addenda, which should be all government contracts — require notifications to be sent to contracting agencies (and the FBI). No notifications have ever been sent out.
The other problem is that Qualcomm’s proprietary modifications to Android, which Flock just explained are tied to the hardware, are not open source at all. There is no backporting fixes to those parts of the OS.
Third party attestation
Yes, Flock has qualified third-party attestations of its cybersecurity. What you should also expect from your vendors is continuous audits by qualified, third-party firms. Flock takes this seriously and goes far beyond surface-level audits.
The post rattles off a list of security standards or frameworks, this time omitting HECVAT and FERPA, and points to its “trust center” where, “[o]nce you gain authorization for access, you may review” the relevant documents.
But you don’t need access to see that the actual certifications on the list — SOC 2 Type II, ISO 27001, ISO 27017, etc. — are about organizational and procedural controls, not software vulnerabilities.
Flock “maintains standards” of “CJIS Insights”, “CJIS ACE”, “FedRAMP 20x,” and “NDAA”. “CJIS Insight” (singular — Flock can’t even get the product name right) is a compliance-tracking software dashboard sold by Diverse Computing, a company in Tallahassee, Florida. “CJIS ACE” is a commercial compliance assessment also sold by Diverse Computing. Neither is a government certification, and neither is affiliated with the DOJ or the FBI.
This is where it gets really interesting and where we have to break out our diamond pickaxes.
Castaldo spent most of this post assuring us that their use of an outdated operating system is fine because they backport software. Now he invokes CJIS and NDAA.
CJIS requires the use of FIPS 140-validated encryption modules. FedRAMP — which Flock also claims and which was codified into law by the NDAA — independently requires FIPS 140 validation as well. To the extent Flock has FIPS 140-2 validation, it has never produced documentation to my knowledge. And soon — in September 2026 — FIPS 140-2 will be no more. Flock will need to move to FIPS 140-3.
FIPS 140-3 places stricter standards on the “Operational Environment,” which includes the operating system: Flock will have to validate the combination of obsolete hardware (Snapdragon 625) and custom operating system as a single “hybrid module.” So far, no such hybrid module shows up in NIST’s database.
As previously reported, Castaldo’s co-founder at “Security Tinkerers,” Will Lin, sits on the board of Bishop Fox — the firm Flock hired for its security audit. Castaldo mentions Bishop Fox only once in passing in this post, and does not mention this relationship at all in the section about third-party verification.
The Proof
I have called for this before, and I will call for it again: Flock should publish its actual NIST validation certificates, and its security disclosures to its customers.
Castaldo’s 2,000-word defense does not contain a single customer notification, a single CVE, or a single NIST certificate number. It relies on straw-man arguments, mischaracterizations of hardware lifecycles, and a little light defamation.
Stop digging and start fixing.
Not counting “Why I Joined Flock Safety: A Mission You Can Feel” ↩︎
This one probably refers to the December emails, where Flock had to tell cops that the information on this website is from public records, not hacks. ↩︎
The statement is incorrect, but the fixed relationship between chip and OS is real. How that relationship is created is irrelevant. ↩︎