Application Development, Zoom And Why Security Cannot Be Optional


What a month it has been! I am not talking about the world pandemic, this is a cybersecurity blog; I am talking about Zoom. The video conferencing app went from struggling for recognition in the market to being the number one video conferencing software practically overnight, jumping from 10 million users to over 200 million in a few weeks.

Anyone thrust into the spotlight gets extra attention. More users mean more eyes, more experiences and, on the dark side, more hacker sharks circling. Where the users are, the hackers follow, and 200 million users are worth far more effort than 10 million.

Some hackers are more whitehat than blackhat: when they find vulnerabilities, they notify Zoom, and eventually the public is informed. Blackhat hackers tell no one about their findings, except perhaps other hackers, and definitely not the software maker. This article is about the vulnerabilities that have been made public.

Zoom had a rough year on the security front even before it was thrust to the top. Several of the problems are sloppy work and poor design decisions, and every one of them, in my opinion, looks like the result of rushing application development in an extreme Agile fashion with no deep consideration of security requirements.

Here are a few of the top issues that have been discovered.

Leaks of email addresses

Zoom puts everyone with the same company email domain in the same “folder” so all the employees can see each other. That is fine when the domain really belongs to a company. However, Zoom treated smaller ISPs as companies rather than as email providers, so strangers who happened to share the same email domain were placed in the same folder and could see each other's information. A sloppy, quick implementation decision without proper vetting of the scenarios.
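A minimal sketch of the missing vetting step (this is hypothetical illustration, not Zoom's actual code, and the domain list is far from complete): check the domain against known public or ISP email providers before ever grouping users by it.

```python
from typing import Optional

# Hypothetical sketch: vet an email domain against a list of known
# public/ISP providers so consumer domains are never treated as one
# "company" directory. The set below is illustrative only.
PUBLIC_EMAIL_DOMAINS = {"gmail.com", "outlook.com", "yahoo.com", "aol.com"}

def company_directory_key(email: str) -> Optional[str]:
    """Return a directory grouping key only for genuine corporate domains."""
    domain = email.rsplit("@", 1)[-1].lower()
    if domain in PUBLIC_EMAIL_DOMAINS:
        return None  # consumer/ISP domain: do not group strangers together
    return domain

print(company_directory_key("alice@examplecorp.com"))  # examplecorp.com
print(company_directory_key("bob@gmail.com"))          # None
```

Even a crude list like this would have caught the ISP scenario; the real fix is to treat "same domain" as a hint that needs verification, not as proof of a shared organization.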

Zoom Bombing

Zoom's implementation and use of Meeting IDs lacks proper security: anyone who has the ID can join the meeting. Because the IDs share a fixed format, hackers can script the search for valid ones, and with 200 million users the odds of finding one go up. Zoom Bombing can be avoided, but far too much of the burden of protecting meeting rooms falls on the users. Quick development that made things easy for users made them easy for hackers and trolls to exploit too.
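A back-of-envelope calculation shows why fixed-format numeric IDs invite scripted guessing. The concurrent-meeting count below is an assumption for illustration, not a published Zoom figure:

```python
# Why fixed-format meeting IDs are guessable: with 9-digit IDs there are
# only a billion possibilities, so even a modest number of live meetings
# gives scripted guessing reasonable odds. live_meetings is an assumed
# figure for illustration only.
id_space = 10 ** 9        # possible 9-digit meeting IDs
live_meetings = 200_000   # assumed concurrently valid IDs

hit_probability = live_meetings / id_space
guesses_per_hit = id_space // live_meetings
print(f"{hit_probability:.4%} chance per guess; ~{guesses_per_hit:,} guesses per valid ID")
```

A few thousand automated guesses per hit is trivial for a script, which is why randomly generated, high-entropy links plus mandatory passwords beat short numeric IDs.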

Poor Encryption

Zoom touted ‘end-to-end encryption’, and it is not clear the claim is true end to end. Regardless, their choice was not the strongest encryption available for mainstream apps, AES-256, but the older, weaker AES-128. In addition, the keys for their AES-128 implementation are issued from a Chinese company, which raises more security and privacy issues than I can count. An easier, sloppier decision, unvalidated by any external security review.
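To put the AES-128 versus AES-256 gap in perspective, here is a small sketch. The only practical difference at key-generation time is length, but the keyspace difference is astronomical: AES-256 does not have twice as many keys, it has 2**128 times as many.

```python
import secrets

# Generating the two key sizes differs only in byte length...
key128 = secrets.token_bytes(16)   # 128-bit key
key256 = secrets.token_bytes(32)   # 256-bit key

# ...but the keyspace ratio is 2**128, not 2.
ratio = (2 ** 256) // (2 ** 128)
print(len(key128), len(key256))    # 16 32
print(ratio == 2 ** 128)           # True
```

Both key sizes are currently beyond brute force, so the bigger issue in Zoom's case was where and how the keys were issued, not the raw key length alone.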

Zoom Software Can Be Easily Corrupted

Good applications have mechanisms built in to prevent outside or third-party software from tampering with or altering the main application. Zoom has these anti-tampering mechanisms in place; however, the mechanisms are not protected from being tampered with themselves. That means a hacker can simply unload the Windows DLL that implements them, rendering the anti-tampering null. Because the DLL is not pinned, the attacker can then inject whatever they want, and pre-existing malware on a machine could even use Zoom's own anti-tampering to help it tamper with Zoom.
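One common mitigation is to "pin" a module by verifying its hash against a known-good digest before trusting it. A minimal sketch, with an illustrative check and a placeholder digest rather than anything from Zoom's actual implementation:

```python
import hashlib
from pathlib import Path

# Hypothetical sketch of module pinning: refuse to trust a DLL/module
# whose hash does not match a known-good digest shipped with (and
# protected by) the application. The digest below is a placeholder.
EXPECTED_SHA256 = "0" * 64

def is_module_untampered(path: str, expected_sha256: str) -> bool:
    """Compare the module's SHA-256 digest against the pinned value."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest == expected_sha256
```

The catch, and Zoom's exact failure mode, is that the pinned digest and the checking code must themselves live somewhere an attacker cannot rewrite, such as inside a signed binary; otherwise the check can be unloaded or patched just like the DLL it protects.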

Zoom Waiting Rooms

At the time of writing, there is an advisory to all users to avoid using Zoom Waiting Rooms due to a serious security flaw. The details of that flaw have not been publicly disclosed until Zoom has had the chance to respond, which is generally 90 days after a researcher notifies the company.

Here is a good report from Citizen Lab detailing their research into the security and privacy flaws they found in Zoom:


If you look at each of these items as a single vulnerability or flaw, you can accept a mistake. Looked at holistically, they show a pattern of application development that does not treat security as part of the critical path. Shortcuts were taken with control decisions; easier implementations were chosen for speed rather than security. You can assume little or no thorough security testing was conducted that would have identified these flaws and shortcomings long before release, and if there was testing, the results were never remediated.

I will not defend Zoom's decisions with their product; however, I do not isolate this situation to them alone. Zoom is now under intense scrutiny because of its place in the public eye. When you get to the top, you are ripped apart, examined, and all your dirty little secrets get exposed. Look at how the application was built and you will see that how they got here is not uncommon; the same pattern appears in many application development teams. Case in point: the Internet of Things, where products are rushed out the door and security is left behind because of the stigma that it will ‘slow them down’. Applications and cloud services are not too different, especially those in start-up mode.

The rush to get product released, the race to get there first, the battle to get the newest features into users' hands before the other guys: all of it pushes the invisible requirements to the side. Security is one of the first to be ‘accepted as a risk’, put on the backlog, delayed until the next release, until it becomes a game of kicking the can down the road.

Now here we are. Zoom put out a product that got them to the top, and their lack of focus on proper security and privacy controls may send them back to the bottom, with users jumping off in droves. Google, NASA, SpaceX, many school districts and many others have banned Zoom's use because of its security issues. As they should.

They are not banned because the product doesn't work; they are banned because the product has been shown to be insecure and to put users' data at risk. We are seeing the first real test of what a lack of security focus will do to a company.

After the backlash, Zoom's CEO scrambled and announced that Zoom will freeze new features for 90 days to fix and improve its security posture. Had they integrated security controls into the development cycle, I can assure you they would still have the product they built, but they would not have to stop everything for 90 days, three months, a full fiscal quarter, to correct past bad decisions.

Going forward, the “Zoom Story” of secure application development will come up in conversations for years. As it should. Real-world examples of how not to do something are valuable for getting a point across.

Zoom isn't alone in its feature-focused development. It's common; in my experience, I still see it too often among those in positions of decision making. Zoom simply got called out on it, and it may well cost them vastly more than doing things right from the beginning would have. What the industry needs to do is embrace this situation, reflect on its own products and development projects, and ask, “If we get caught up in something like this, can we afford a three-month freeze?”

Be Aware, Be Safe.


