Dec 12, 2017
John

A state of constant uncertainty or uncertain constancy? Fast flux explained

Last August, WireX made headlines. For one thing, it was dubbed the first-known DDoS botnet to use the Android platform. For another, it used a technique that—for those who have been around the industry for a while—rang a familiar bell: fast flux.

In the context of cybersecurity, fast flux could refer to two things: one, a network similar to a P2P that hosts a botnet’s command and control (C&C) servers and proxy nodes; and two, a method of registering on a domain name system (DNS) that prevents the host server IP address from being identified. For this post, we’re focusing on the latter.

Malware creators were among the first actors to use this tactic. And Storm, the infamous worm that boggled and exasperated Internet users and security researchers alike in 2007, was one of the first binaries to prove fast flux’s effectiveness in protecting its mothership from detection and exposure. Fast flux made it doubly difficult for the security community and law enforcement agencies to track down criminal activity and shut down operations. Eventually—and albeit gradually—Storm’s reign ended, mainly because Atrivo, the ISP that hosted the worm’s master servers, went dark.

From then on, the actors behind fast flux campaigns have been varied: from phishers and bot herders to criminal gangs behind money mule recruitment sites. There are also those that use fast flux to engage in other unlawful schemes, such as hosting exploit sites, extreme or illegal adult content sites, carding sites, bogus online pharmacies, and web traps. Recently, fast flux has been gaining notoriety and usage among cybersquatters, which makes this another threat for businesses with an online presence.

Fast flux—what is it really?

Fast flux is, in a nutshell, an advanced game of hide and seek. Cybercriminals hide by assigning hundreds or thousands of IP addresses—swapped in and out with extreme frequency—to a fully qualified domain name (FQDN), let’s say www.uniquedomain.org. This is done by combining (1) distributing the load received by the server across many geographical points acting as proxies or redirectors and (2) banking on a remarkably short DNS time-to-live (TTL) for each record. This address swapping happens so fast that the whole architecture seems to be in flux.

Here’s a simple illustration: If criminals assign www.uniquedomain.org a set of IP addresses that change every 150 seconds, users who access www.uniquedomain.org are actually connecting to different infected machines every single time.
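
To get a feel for what this looks like from the outside, here is a minimal PHP sketch that polls the A records of the example domain a few times and prints each address with its TTL. The domain is hypothetical and the 150-second pause simply mirrors the illustration above; against a real fast-flux domain you would expect very low TTLs and a changing set of IPs.

<?php
// Poll the A records of a (hypothetical) fast-fluxed domain several times
// and print the returned IP addresses together with their TTLs.
$domain = 'www.uniquedomain.org'; // hypothetical example domain
$rounds = 5;
$pause  = 150; // seconds between lookups, mirroring the example above

for ($i = 1; $i <= $rounds; $i++) {
    echo "Lookup #$i\n";
    $records = dns_get_record($domain, DNS_A) ?: []; // empty array if the lookup fails
    foreach ($records as $record) {
        printf("  %-15s (TTL: %d seconds)\n", $record['ip'], $record['ttl']);
    }
    if ($i < $rounds) {
        sleep($pause);
    }
}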

Fast flux is occasionally used as a standalone term; however, we also see it used as a descriptor of a network, botnet, or malicious agent. As such, you’ll find the terms below used as well, and for clarity, we have listed their definitions:

  • fast-flux service network (FFSN): The Honeynet Project defines this as “a network of compromised computer systems with public DNS records that are constantly changing, in some cases every few minutes.” There are two known types of this: single-flux and double-flux.
  • fast-flux botnet: Refers to a botnet that uses fast flux techniques. Herders behind such a botnet are known to engage in hosting-as-a-service schemes wherein they rent out their networks to other criminals. Also, some fast-flux botnets have begun supporting SSL communication.
  • fast-flux agent: Depending on the context, this could refer to either (1) the malware responsible for infecting systems to add them to the fast-flux network or (2) the machine that belongs to a fast-flux network.

Fast flux shouldn’t be confused with domain flux, which involves the changing of the domain name, not the IP address. Both fluxing techniques have been used by cybercriminals.

Wait, so assigning different IP addresses to a single domain name is legal?

Although it’s generally the case that one domain name points to one IP address, this association isn’t a strict mapping. And that is a good thing! Otherwise, web admins wouldn’t be able to efficiently distribute incoming network traffic to multiple resources, wherein a single resource corresponds to a unique IP address. This is the basic concept behind load balancing, and popular websites use it all the time. And round-robin DNS—this one-domain-to-many-IP-address association—is just one of several load-balancing algorithms one can implement.

There’s nothing illicit about this. What criminals are doing is merely taking advantage of or abusing what network technology already has to offer.
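
A quick way to see that one-to-many association for yourself is to resolve a name you suspect is load balanced and count the A records that come back. The sketch below uses PHP’s built-in resolver; the hostname is only a placeholder to substitute with a site you know uses round-robin DNS.

<?php
// List every IPv4 address published for a single hostname.
// A round-robin DNS setup simply returns several A records for one name.
$hostname = 'www.example.com'; // placeholder; use a site known to be load balanced

$ips = gethostbynamel($hostname); // array of IPv4 addresses, or false on failure
if ($ips === false) {
    exit("Could not resolve $hostname\n");
}

printf("%s resolves to %d address(es):\n", $hostname, count($ips));
foreach ($ips as $ip) {
    echo "  $ip\n";
}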

Aside from Storm, what other malware has been associated with fast flux?

Threat campaigns that use malware associated with fast flux networks usually involve botnets. In the earlier years, worms were the type that used fast-flux botnets: Storm is a worm binary, and so is Stration, its rival. Nowadays, other malware strains have banked on fast flux’s efficacy. We have Kronos and ZeuS/Zbot, two known banking Trojans; Kelihos, a Trojan spammer and Bitcoin miner; TeslaCrypt, a ransomware family (its payment sites were found hosted on an FFSN in Eastern Europe); and Asprox, a Trojan password stealer turned advanced persistent threat (APT).

As a side note, fast flux networks are not only used to hide malicious activities. Akamai, a known cloud delivery platform, has revealed in a white paper [PDF] that a fast flux network was used in several web attacks, specifically SQL injection, web scraping, and credential abuse, against their own customers.


Read: Inside the Kronos malware—Part 1, Part 2


Can fast flux be detected/identified? If so, how?

Definitely. Some organizations and independent groups in the security industry have put a lot of effort into investigating, studying, and educating others on what fast flux is, how it works, and how it can be detected. Their published references are well worth visiting, browsing, and reading thoroughly.

Can users protect themselves from fast flux activity?

When it comes to keeping our computing devices safe from physical and online compromise—with the data in them unaltered and secure—extra vigilance and good security hygiene can save folks from a lot of headaches in the future. Installing an anti-malware solution with URL blocking features not only protects devices from malware but also blocks sites that have been deemed malicious, consequently stopping the attack chain. Lastly, regularly update all the security software you use.

Stay safe out there!


Dec 11, 2017
John

A week in security (December 04 – December 10)

Last week on the blog, we looked at a RIG EK malware campaign, explored how children are being tangled up in money mule antics, took a walk through the world of Blockchain, and gave a rundown of what’s involved when securing web applications. We also laid out the trials and tribulations of the Internet of Things, advised you to be on the lookout for an urgent TeamViewer update, tore down the disguise of new Mac malware HiddenLotus, sighed at the inevitability of a Napoleon-themed piece of ransomware, and unveiled our New Mafia report.

Other news

  • Bitcoin chaos as NiceHash is compromised and thousands of Bitcoins go wandering into the void, potentially to the tune of $62m. (source: Reddit)
  • How easy is it to make a children’s toy start swearing? This easy. (source: The Register)
  • Chrome 63 is now available and comes with multiple security improvements and additions. (source: Chrome updates website)
  • Phishers are slowly turning to HTTPs scam sites—but why? (source: PhishLabs)
  • The Andromeda Botnet is finally dismantled by law enforcement. (source: Help Net Security)
  • If you try to hack your friends out of jail, you may well end up joining them. (source: MLive)
  • Perfect email spoofs? Oh dear. (source: Wired)
  • Think you’ll be getting a ransom out of North Carolina, think again. (source: Chicago Tribune)

Stay safe, everyone!


Dec 11, 2017
John

How cryptocurrency mining works: Bitcoin vs. Monero

Ever wondered why websites that mine in the background don’t mine for the immensely hot Bitcoin, but for Monero instead? We can explain that. Just as there are different types of cryptocurrencies, there are also different types of mining. After providing some background information about blockchain [1],[2] and cryptocurrency, we’ll explain how the mining aspect of Bitcoin works—and how others differ.

Proof-of-Work mining

Cryptocurrency miners are in a race to solve a mathematical puzzle, and the first one to solve it (and get it approved by the nodes) gets the reward. This method of mining is called the Proof-of-Work method. But what exactly is this mathematical puzzle? And what does the Proof-of-Work method involve? To explain this, we need to show you which stages are involved in the mining process:

  1. Verify if transactions are valid. Transactions contain the following information: source, amount, destination, and signature.
  2. Bundle the valid transactions in a block.
  3. Get the hash that was assigned to the previous block.
  4. Solve the Proof-of-Work problem (see below for details).

The Proof-of-Work problem is as follows: the miners look for a SHA-256 hash that has to match a certain format (target value). The hash is based on:

  • The block number they are currently mining.
  • The content of the block, which in Bitcoin is the set of valid transactions that were not in any of the former blocks.
  • The hash of the previous block.
  • The nonce, which is the variable part of the puzzle. The miners try different nonces to find one that results in a hash under the target value.

So, based on the information gathered and provided, the miners race against each other to try and find a nonce that results in a hash that matches the prescribed format. The target value is designed so that the estimated time for someone to mine a block successfully is around 10 minutes (at the moment).

If you look at BlockExplorer.com, for example, you will notice that every BlockHash is 64 hexadecimal digits (256 bits) long and, at the moment, starts with 18 zeroes. For example, the BlockHash for Block #497542 equals 00000000000000000088cece59872a04457d0b613fe1d119d9467062e57987f1. At the time of writing, this is the target—the value of the hash has to be so low that the first 18 hex digits are zeroes. So, basically, miners take some fixed input and start trying different nonces (which must be integers), then calculate whether the resulting hash is under the target value.
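
To make the nonce search concrete, here is a deliberately simplified proof-of-work loop in PHP. It only looks for a handful of leading zero hex digits over a toy string; real Bitcoin mining double-hashes an 80-byte binary block header and compares the result against a far lower 256-bit target, so treat this strictly as an illustration of the principle.

<?php
// Toy proof-of-work: find a nonce so that sha256(block data . nonce)
// starts with $difficulty leading zero hex digits.
$blockData  = 'block#1|prev:00000000000000000088cece...|tx:alice->bob:1.5';
$difficulty = 5; // leading zero hex digits required (Bitcoin currently needs ~18)
$prefix     = str_repeat('0', $difficulty);

$nonce = 0;
do {
    $hash = hash('sha256', $blockData . $nonce);
    $nonce++;
} while (substr($hash, 0, $difficulty) !== $prefix);

printf("Found nonce %d -> hash %s\n", $nonce - 1, $hash);

Each additional zero digit multiplies the expected work by 16, which is how the network keeps tuning the difficulty toward that 10-minute block time.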

Monero

How is Monero different?

Browser mining and other methods of using your system’s resources for other people’s gain are usually done with cryptocurrencies other than Bitcoin, and Monero is the most common one. In essence, Monero mining is not all that different from Bitcoin: it also uses the Proof-of-Work method. Yet Monero is a popular cryptocurrency among those that mine behind the scenes, and we’ll explain why.

Anonymity

The most notable difference between Bitcoin and Monero mining is anonymity. While you will hear people say that Bitcoin is anonymous, you should realize that this is not by design. If you look at a site like BlockExplorer, you can search for every block, transaction, and address. So if you have sent or received Bitcoin to or from an address, you can look at every transaction ever made to and from that address.

That is why we call Bitcoin “pseudonymous.” You may or may not know the name of the person behind an address, but you can track every payment to and from that address if you want. There are ways to obfuscate your traffic, but they are difficult, costly, and time-consuming.

Monero, however, has always-on privacy features applied to its transactions. When someone sends you Monero, you can’t tell who sent it. And when you send Monero to someone else, the recipient won’t know it was you unless you tell them. And because you don’t know their wallet address and you can’t backtrack their transactions, you can’t find out how “rich” they are.

Transactions inside a Bitcoin block are an open book.

Mining

Monero mining does not depend on heavily specialized, application-specific integrated circuits (ASICs); it can be done with any CPU or GPU. For Bitcoin, by contrast, it is almost pointless for an ordinary computer without ASICs to participate in the mining process. The Monero mining algorithm does not favor ASICs because it was designed to attract more “little” nodes rather than rely on a few farms and mining pools.

There are more differences that lend themselves to Monero’s popularity among behind-the-scenes miners, like the adaptable block size, which means your transactions do not have to wait until they fit into a later block. The mainstream Bitcoin blockchain has a 1 MB block cap, whereas Monero blocks do not have a fixed size limit. So Bitcoin transactions will sometimes have to wait longer, especially when the transaction fees are low.

The advantages of Monero over Bitcoin for threat actors or website owners are mainly that:

  • It’s untraceable.
  • It can make faster transactions (especially when they are small).
  • It can use “normal” computers effectively for mining.

Links

For those of you looking for more information on the technical aspects of this subject, we recommend:

Bitcoin block hashing algorithm

The Blockchain Informer

Blockchain Info

How Bitcoin mining works

How does Monero privacy work


Dec 8, 2017
John

Napoleon: a new version of Blind ransomware

The ransomware previously known as Blind has been spotted recently with a .napoleon extension and some additional changes. In this post, we’ll analyze the sample for its structure, behavior, and distribution method.

Analyzed samples

31126f48c7e8700a5d60c5222c8fd0c7 – Blind ransomware (the first variant), with .blind extension

9eb7b2140b21ddeddcbf4cdc9671dca1 – Variant with .kill extension

235b4fa8b8525f0a09e0c815dfc617d3 – Variant with .napoleon extension (main focus of this analysis)

//special thanks to @demonslay335  for sharing the older samples

Distribution method

So far, we are not 100 percent sure about the distribution method of this new variant. However, looking at the features of the malware and judging from information from the victims, we suspect that the attackers spread it manually by dropping and deploying it on hacked machines (probably via IIS). This method of distribution is not popular or efficient; however, we’ve encountered similar cases in the past, such as DMALocker or LeChiffre ransomware. Also, a few months ago, hacked IIS servers were used as a vector to plant Monero miners. The common feature of samples dropped in this way is that they are not protected by any cryptor (because it’s not necessary for this distribution method).

Behavioral analysis

After the ransomware is deployed, it encrypts files one-by-one, adding its extension in the format [email].napoleon.

Looking at the content of the encrypted test files, we can see that the same plaintext produces different ciphertext. This always indicates that a different key or initialization vector was used for each file. (After examining the code, it turned out that the difference was in the initialization vector.)
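
That behavior is easy to reproduce with any block cipher in a chaining mode. The PHP sketch below encrypts the same plaintext twice with the same key but two different IVs; the AES-256-CBC choice, the key, and the IVs are our own illustrative assumptions, not values recovered from the malware.

<?php
// Same plaintext, same key, two different IVs: with CBC the ciphertexts
// differ completely, which matches what we saw in the encrypted test files.
$plaintext = str_repeat('A', 64);  // identical content in both "files"
$key       = random_bytes(32);     // illustrative 256-bit key
$iv1       = random_bytes(16);     // per-file IV #1
$iv2       = random_bytes(16);     // per-file IV #2

$ct1 = openssl_encrypt($plaintext, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv1);
$ct2 = openssl_encrypt($plaintext, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv2);

echo "Ciphertext 1: " . bin2hex($ct1) . "\n";
echo "Ciphertext 2: " . bin2hex($ct2) . "\n";
var_dump($ct1 === $ct2); // bool(false)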

Visualizing the encrypted content helps us guess the algorithm with which the files were encrypted. In this case, we see no visible patterns, which leads us to suspect an algorithm that uses some method of chaining cipher blocks. (The most commonly used is AES in CBC mode, or possibly in CFB mode.) Below, you can see the visualization made with the help of the file2png script: on the left is a BMP file before encryption, and on the right, after encryption by Napoleon:

 

At the end of each file, we found a unique 384-character-long block of alphanumeric characters. It represents 192 bytes written in hexadecimal. Most probably, this block is the encrypted initialization vector for the particular file:

The ransom note is in HTA format and looks like this:

It also contains a hexadecimal block, which is probably the victim’s key, encrypted with the attackers’ public key.

The GUI of Napoleon looks simplified in comparison to the Blind ransomware. However, the building blocks are the same:

It is common among ransomware authors to prepare a Tor-based website that allows automated payment processing and better-organized communication with the victim. In this case, the attackers decided to use just an email address—probably because they planned for the campaign to be small.

Among the files created by the Napoleon ransomware, we will no longer find the cache file (netcache64.sys) that, in previous editions, allowed the key to be recovered without paying the ransom.

Below is the cache file dropped by the Blind ransomware (the predecessor of Napoleon):

Inside the code

The malware is written in C++. It is not packed by any cryptor.

The execution starts in the function WinMain:

The flow is pretty simple. First, the ransomware checks the privileges with which it runs. If it has sufficient privileges, it deletes shadow copies. Then, it closes processes related to databases—Oracle and SQL Server—so that they will not block access to the database files it wants to encrypt. Next, it goes through the disks and encrypts found files. At the end, it pops up the dropped ransom note in HTA format.

Comparing the code of Napoleon with the code of Blind, we see that not only has the extension of encrypted files changed, but also many functions inside have been refactored.

Below is a fragment of the view from BinDiff: Napoleon vs Blind:

What is attacked?

First, the ransomware enumerates all the logical drives in the system and adds them to a target list. It attacks both fixed and remote drives (type 3 -> DRIVE_FIXED and type 4 -> DRIVE_REMOTE):

This ransomware does not have any list of attacked extensions. It attacks all the files it can reach. It skips only the files that already have the extension indicating they are encrypted by Napoleon:

The email used in the extension is hardcoded in the ransomware’s code.

Encryption implementation

Just like the previous version, the cryptographic functions of Napoleon are implemented with the help of the statically-linked library Crypto++ (source).

Referenced strings pointing to Crypto++:

Inside, we found a hardcoded blob—the RSA public key of the attackers:

After conversion to a standardized format, such as PEM, we were able to read its parameters using openssl, confirming that it is a valid 2048-bit RSA key:

Public-Key: (2048 bit)
Modulus:
 00:96:c7:3f:aa:71:b1:e4:2c:2a:f3:22:0b:c2:88:
 8c:87:63:b3:fa:31:97:9b:48:1b:64:2a:14:b9:85:
 0a:2e:30:b2:22:c2:ee:fe:ce:de:db:b9:b7:68:3f:
 12:a6:b3:e1:2b:db:ac:90:ea:3e:0a:07:25:3d:19:
 f2:98:b3:b2:e3:1b:22:e6:0d:ad:d5:97:6f:57:cd:
 77:6c:68:16:49:db:7d:c0:b8:03:e3:81:f5:62:ce:
 22:ae:d9:71:f4:ed:28:f0:29:0b:e3:3c:ea:2d:d8:
 13:fd:00:ff:da:4a:55:b8:70:c3:9f:ef:32:43:4b:
 3f:82:fe:26:31:03:99:fd:b0:1a:2d:7b:f8:b6:65:
 ab:d8:65:f3:c6:f3:e3:06:a9:58:5f:3e:35:0e:4c:
 f0:9e:94:49:66:2e:9c:6c:51:27:62:c1:39:02:cc:
 fb:32:4f:9a:92:f5:f9:99:96:5d:a7:65:5f:1c:fc:
 0a:1e:8b:45:53:06:89:9f:50:11:d6:06:84:a2:f2:
 5f:ab:e4:fb:cf:0d:09:64:d7:7c:99:f9:2a:b7:f5:
 c6:e4:c1:23:24:4e:2b:9f:0b:98:c3:94:93:4f:ca:
 c3:ff:ec:70:9d:df:78:37:56:0d:8b:c4:db:6d:b3:
 73:ac:0a:cb:ac:28:b2:d4:54:61:3e:3c:7e:67:97:
 f5:d9
Exponent: 17 (0x11)

The attackers’ public key is later used to encrypt the random key generated for the particular victim. The random key is the one used to encrypt files—after it is used and destroyed, its encrypted version is stored in the victim’s ID displayed in the ransom note. Only the attackers, who hold the private RSA key, are capable of recovering it.
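
The overall pattern—generate a random symmetric key per victim, then wrap it with the attackers’ RSA public key—can be sketched in PHP as follows. This is only a model of the hybrid scheme under our own assumptions (OAEP padding, a placeholder key file); it is not the malware’s actual Crypto++ implementation.

<?php
// Hybrid-encryption sketch: a random AES key is generated for the victim and
// then "wrapped" with the attackers' RSA public key, so only the holder of
// the matching private key can ever recover it.
$attackerPublicKeyPem = file_get_contents('attacker_pub.pem'); // placeholder path

$aesKey = random_bytes(32); // random 32-byte file-encryption key

$ok = openssl_public_encrypt(
    $aesKey,
    $wrappedKey,                // e.g. what ends up shown as the victim "ID"
    $attackerPublicKeyPem,
    OPENSSL_PKCS1_OAEP_PADDING  // padding scheme is an assumption, not confirmed
);

if ($ok) {
    echo 'Victim ID (wrapped key): ' . bin2hex($wrappedKey) . "\n";
}
// Files would then be encrypted with $aesKey (plus a fresh IV per file),
// after which the plaintext key is discarded.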

The random AES key (32 bytes) is generated by a function provided by the Crypto++ library:

Underneath, it uses the secure random generator CryptGenRandom:

All the files are encrypted with the same key; however, the initialization vector is different for each one.

Encrypting single file:

Inside the function denoted as encrypt_file, the crypto is initialized with a new initialization vector:

The fragment of code responsible for setting the IV:

Setting initialization vector:

Encrypting file content:

The same buffer after encryption:

Conclusion

Napoleon ransomware will probably not become a widespread threat. The authors prepared it for small campaigns—a lot of data, like the email address, is hardcoded. It does not come with any external configuration, like Cerber has, that would allow for fast customization.

So far, it seems that the authors fixed the previous bug in Blind of dropping the cache file. That means the ransomware is not decryptable without the original key. All we can recommend is prevention.

This ransomware family is detected by Malwarebytes as Ransom.Blind.

Appendix

Read about how to decrypt the previous Blind variant here.

 

 


Dec 8, 2017
John

Interesting disguise employed by new Mac malware HiddenLotus

On November 30, Apple silently added a signature to the macOS XProtect anti-malware system for something called OSX.HiddenLotus.A. It was a mystery what HiddenLotus was until, later that same day, Arnaud Abbati found the sample and shared it with other security researchers on Twitter.

The HiddenLotus “dropper” is an application named Lê Thu Hà (HAEDC).pdf, using an old trick of disguising itself as a document—in this case, an Adobe Acrobat file.

This is the same scheme that inspired the file quarantine feature in Mac OS X. Introduced in Leopard (Mac OS X 10.5), this feature tagged files downloaded from the Internet with a special piece of metadata to indicate that the file had been “quarantined.” Later, when the user tried to open the file, if it was an executable file of any kind, such as an application, the system would display a warning to the user.

The intent behind this feature was to ensure that the user knew that the file they were opening was an application, rather than a document. Even back in 2009, malicious apps were masquerading as documents. File quarantine was meant to combat this problem.

Malware authors have been using this trick ever since, despite file quarantine. Even earlier this year, repeated outbreaks of the Dok malware were distributed in the form of applications disguised as Microsoft Word documents.

So HiddenLotus didn’t seem all that interesting at first, other than as a new variant of the OceanLotus backdoor first seen being used to attack numerous facets of Chinese infrastructure. OceanLotus was last seen earlier this summer, disguised as a Microsoft Word document and targeting victims in Vietnam.

But there was something strange about HiddenLotus. Unlike past malware, this one didn’t have a hidden .app extension to indicate that it was an application. Instead, it actually had a .pdf extension. Yet the Finder somehow identified it as an application anyway.

This was quite puzzling. Further investigation did not turn up a hidden extension. There was also no sign of a trick like the one used by Janicab in 2013.

Janicab used the old fake document technique, being distributed as a file named (apparently) “RecentNews.ppa.pdf.” However, the use of an RLO (right-to-left override) character caused characters following it to be displayed as if they were part of a language meant to be read right-to-left, instead of left-to-right as in English.

In other words, Janicab’s real filename was actually “RecentNews.fdp.app,” but the presence of the RLO character after the first period in the name caused everything following to be displayed in reverse in the Finder.

However, this deception was not used in HiddenLotus. Instead, it turned out that the ‘d’ in the .pdf extension was not actually a ‘d.’ Instead, it was the Roman numeral ‘D’ in lowercase, representing the number 500.

It was at this point that Abbati’s tweet referring to “its very nice small Roman Unicode” began to make sense. However, it was still unclear exactly what was going on, and how this special character allowed the malware to be treated as an application.

After further consultation with Abbati, it turned out that there’s something rather surprising about macOS: An application does not need to have a .app extension to be treated like an application.

An application on macOS is actually a folder with a special internal structure called a bundle. A folder with the right structure is still only a folder, but if you give it an .app extension, it instantly becomes an application. The Finder treats it as if it were a single file instead of a folder, and a double-click launches the application rather than opening the folder.

When double-clicking a file (or folder), LaunchServices will consider the extension first. If the extension is known, the item will be opened according to that extension. Thus, a file with a .txt extension will, by default, be opened with TextEdit. Some folders may be treated as documents, as in the case of the .aplibrary extension used for an Aperture library “file.” A folder with the .app extension will, assuming it has the right internal structure, be launched as an application.

A file with an unfamiliar extension is handled by asking the user what they want to do. Options are given to choose an application to open the file or to search the Mac App Store.

However, something strange happens when double-clicking a folder with an unknown extension. In this case, LaunchServices falls back on looking at the folder’s bundle structure (if any).

So what does this mean? The HiddenLotus dropper is a folder with the proper internal bundle structure to be an application, and it uses an extension of .pdf, where the ‘d’ is a Roman numeral, not a letter. Although this extension looks exactly the same as the one used for Adobe Acrobat files, it’s completely different, and there are no applications registered to handle that extension. Thus, the system will fall back on the bundle structure, treating the folder as an application, even though it does not have a telltale .app extension.

There is nothing particularly special about this .pdf extension (using a Roman numeral ‘d’) except that it is not already in use. Any other extension that is not in use will work just as well:

Of course, the example shown above wouldn’t fool anyone; it’s merely illustrative of the issue.

This means that there is an enormously large list of possible extensions, especially when Unicode characters are included. It is easily possible to construct extensions from Unicode characters that look exactly like other, normal extensions, yet are not the same. This means the same trick could be used to mimic a Word document (.doc), an Excel file (.xls), a Pages document (.pages), a Numbers document (.numbers), and so on.
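
A short PHP snippet makes the point: below, the ‘d’ in the second extension is U+217E (SMALL ROMAN NUMERAL FIVE HUNDRED). Which exact code point HiddenLotus used is our assumption, but any convincing lookalike character behaves the same way—the strings render almost identically yet compare as different, so no application is registered for the fake extension.

<?php
// A real ".pdf" versus a lookalike whose 'd' is the lowercase Roman numeral
// five hundred (U+217E). They look nearly identical but are different strings.
$real      = ".pdf";
$lookalike = ".p\u{217E}f"; // requires PHP 7+ for the \u{...} escape

var_dump($real === $lookalike);  // bool(false)
echo bin2hex($real) . "\n";      // 2e706466
echo bin2hex($lookalike) . "\n"; // 2e70e285be66 (UTF-8 bytes of the lookalike)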

This is a neat trick, but it’s still not going to get past file quarantine. The system will alert you that what you’re trying to open is an application. Unless, of course, what you are opening was downloaded via an application that does not use the APIs that properly set the quarantine flag on the file, as is the case for some torrent apps.

Ultimately, it’s very unlikely that this trick is going to have any kind of significant impact on the Mac threat landscape. It’s probable that we will see it used again in the future, but the risk to the average user is not significantly higher than in the case of any other fake document malware.

More than anything else, this trick opens our eyes to an interesting aspect of how macOS identifies and launches applications.

If you think you may have encountered this malware, Malwarebytes for Mac will protect against it, and will scan for and remove it, if present, for free.


Dec 7, 2017
John

How we can stop the New Mafia’s digital footprint from spreading in 2018

Cybercriminals are the New Mafia of today’s world. This new generation of hackers resembles traditional Mafia organizations, not just in its professional coordination, but in its ability to intimidate and paralyze victims.

To help businesses bring a good security fight to the digital streets, we released a new report today: The New Mafia, Gangs, and Vigilantes: A Guide to Cybercrime for CEOs. This report details the evolution of cybercrime from early beginnings to the present day and the emergence of four distinct groups of cybercriminals: traditional gangs, state-sponsored attackers, ideological hackers, and hackers-for-hire. We worked with a global panel of experts from a variety of disciplines, including PwC, Leeds University, University of Sussex, the Centre for Cyber Victim Counselling in India, and the University of North Carolina to collect the data within the report.

The guide shines a light on the activities of cybercriminals to understand how they work, to examine their weapons of choice—namely ransomware—and to assess what action is needed to protect against them.

Right now, the New Mafia is winning. We found that ransomware attacks have grown by almost 2,000 percent from September 2015 to September 2017. And cyberattacks on businesses have increased 23 percent in 2017. What these attacks show is that we as an ecosystem—vendors, governments, companies—aren’t learning from our mistakes.

Instead of coming together to defeat a common enemy, the focus remains on shaming victims. Whether they be individuals or companies, we’re all quick to point the finger. But that narrative must change. Those affected by cybercrime are often embarrassed and they don’t speak out, which can have dangerous consequences as organizations delay or cover up breach incidents without a plan to prevent them from happening again. We need to educate the C-suite so that CEOs and IT departments both recognize the signs of an attack and can minimize damages, while educating victims instead of shaming them.

This new mentality is important as we face the future of cybercrime. I read articles every week about the billions of devices that will be connected in the future. While the overarching goal is to make our lives easier, it also presents a threat.

The New Mafia is well prepared to exploit the increase in connected devices from cars to pacemakers. We are still making our way through the Wild West of the Internet of Things with early security solutions and a lack of legislation.

For now, we need to keep our digital streets clean with a collaborative model between the public and private sector, general awareness about the dangers of cybercrime, and the use of proactive defenses. Shifting from shaming victims who have been attacked to engaging with them will remove the crime bosses from our highways of digital opportunity in 2018.

To view the full report, featuring original data and insight taken from a global panel of experts from a variety of disciplines, including PwC, Leeds University, University of Sussex, the Centre for Cyber Victim Counselling in India, and the University of North Carolina, visit here.


Dec 6, 2017
John

Use TeamViewer? Fix this dangerous permissions bug with an update

TeamViewer, the remote control/web conference program used to share files and desktops, is suffering from a case of “patch it now.” Issued yesterday, the fix addresses an issue where one user can gain control of another’s PC without permission.

Windows, Mac, and Linux are all apparently affected by this bug, which was first revealed over on Reddit. According to TeamViewer, the Windows patch is already out, with Mac and Linux patches to follow soon. It’s definitely worth updating, as there are shenanigans to be had whether acting as client or server:

As the Server: Enables extra menu item options on the right side pop-up menu. Most useful so far to enable the “switch sides” feature, which is normally only active after you have already authenticated control with the client, and initiated a change of control/sides.

As the Client: Allows for control of mouse with disregard to server’s current control settings and permissions.

This is all done via an injectable C++ DLL. The file, injected into TeamViewer.exe, then allows the presenter or the viewer to take full control.

It’s worth noting that even if you have automatic updates set, it might take between three and seven days for the patch to be applied.

Many tech support scammers make use of programs such as TeamViewer, but with this new technique they wouldn’t have to first trick the victim into handing over control. While in theory a victim should know immediately if a scammer has gained unauthorised control over their system and kill off the session straight away, in practice it doesn’t always pan out like that.

TeamViewer has had other problems in the past, including being used as a way to distribute ransomware, denying being hacked after bank accounts were drained, and even being temporarily blocked by a UK ISP. Controversies aside, you should perhaps consider uninstalling the program until the relevant patch for your operating system is ready to install. This could prove to be a major headache for the unwary until the problem is fully solved.


Dec 6, 2017
John

Internet of Things (IoT) security: what is and what should never be

The Internet has penetrated seemingly all technological advances today, resulting in Internet for ALL THE THINGS. What was once confined to a desktop and a phone jack is now networked and connected in multiple devices, from home heating and cooling systems like the Nest to AI companions such as Alexa. These devices can pass information through the web to anywhere in the world—server farms, company databases, your own phone. (Exception: that one dead zone in the corner of my living room. If the robots revolt, I’m huddling there.)

This collection of inter-networked devices is what marketing folks refer to as the Internet of Things (IoT). You can’t pass a REI vest-wearing Silicon Valley executive these days without hearing about it. Why? Because the more we send our devices online to do our bidding, the more businesses can monetize them. Why buy a regular fridge when you can spend more on one that tells you when you’re running out of milk?


Unfortunately (and I’m sure you saw this coming), the more devices we connect to the Internet, the more we introduce the potential for cybercrime. Analyst firm Gartner says that by 2020, there will be more than 26 billion connected devices, excluding PCs, tablets, and smartphones. Barring an unforeseen Day After Tomorrow–style global catastrophe, this technology is coming. So let’s talk about the inherent risks, shall we?

What’s happening with IoT cybercrime today?

 Both individuals and companies using IoT are vulnerable to breach. But how vulnerable? Can criminals hack your toaster and get access to your entire network? Can they penetrate virtual meetings and procure a company’s proprietary data? Can they spy on your kids, take control of your Jeep, or brick critical medical devices?

So far, the reality has not been far from the hype. Two years ago, a smart refrigerator was hacked and began sending pornographic spam while making ice cubes. Baby monitors have been used to eavesdrop on and even speak to sleeping (or likely not sleeping) children. In October 2016, thousands of security cameras were hacked to create the largest-ever Distributed Denial of Service (DDoS) attack against Dyn, a provider of critical Domain Name System (DNS) services to companies like Twitter, Netflix, and CNN. And in March 2017, Wikileaks disclosed that the CIA has tools for hacking IoT devices, such as Samsung SmartTVs, to remotely record conversations in hotel or conference rooms. How long before those are commandeered for nefarious purposes?

Privacy is also a concern with IoT devices. How much do you want KitchenAid to know about your grocery-shopping habits? What if KitchenAid partners with Amazon and starts advertising to you about which blueberries are on sale this week? What if it automatically orders them for you?

At present, IoT attacks have been relatively scarce in frequency, likely owing to the fact that there isn’t yet huge market penetration for these devices. If just as many homes had Cortanas as have PCs, we’d be seeing plenty more action. With the rapid rise of IoT device popularity, it’s only a matter of time before cybercriminals focus their energy on taking advantage of the myriad of security and privacy loopholes.

Security and privacy issues on the horizon

According to Forrester’s 2018 predictions, IoT security gaps will only grow wider. Researchers believe IoT will likely integrate with the public cloud, introducing even more potential for attack through the accessing of, processing, stealing, and leaking of personal, networked data. In addition, more money-making IoT attacks are being explored, such as cryptocurrency mining or ransomware attacks on point-of-sale machines, medical equipment, or vehicles. Imagine being held up for ransom when trying to drive home from work. “If you want us to start your car, you’ll have to pay us $300.”

It’ll be like a real-life Monopoly game.

Privacy and data-sharing may become even more difficult to manage. For example, how do you best protect children’s data, which is highly regulated and protected according to the Children’s Online Privacy Protection Rule (COPPA), if you’re a maker of smart toys? There are rules about which personally identifiable information can and cannot be captured and transmitted for a reason—because that information can ultimately be intercepted.

Privacy concerns may also broaden to include how to protect personal data from intelligence gathering by domestic and foreign state actors. According to the Director of National Intelligence, Daniel Coats, in his May 2017 testimony at a Senate Select Committee on Intelligence hearing: “In the future, state and non-state actors will likely use IoT devices to support intelligence operations or domestic security or to access or attack targeted computer networks.”

In a nutshell, this could all go far south—fast.

So why are IoT defenses so weak?

Seeing as IoT technology is a runaway train, never going back, it’s important to take a look at what makes these devices so vulnerable. From a technical, infrastructure standpoint:

  • There’s poor or non-existent security built into the device itself. Unlike mobile phones, tablets, and desktop computers, little-to-no protections have been created for these operating systems. Why? Building security into a device can be costly, slow down development, and sometimes stand in the way of a device functioning at its ideal speed and capacity.
  • The device is directly exposed to the web because of poor network segmentation. It can act as a pivot to the internal network, opening up a backdoor to let criminals in.
  • There’s unneeded functionality left in based on generic, often Linux-derivative hardware and software development processes. Translation: Sometimes developers leave behind code or features developed in beta that are no longer relevant. Tsk, tsk. Even my kid picks up his mess when he’s done playing. (No he doesn’t. But HE SHOULD.)
  • Default credentials are often hard coded. That means you can plug in your device and go, without ever creating a unique username and password. Guess how often cyber scumbags type “1-2-3-4-5” and get the password right? (Even Dark Helmet knew not to put this kind of password on his luggage, nevermind his digital assistant.)

From a philosophical point of view, security has simply not been made an imperative in the development of these devices. The swift march of progress moves us along, and developers are now caught up in the tide. In order to reverse course, they’ll need to walk against the current and begin implementing security features—not just quickly but thoroughly—in order to fight off the incoming wave of attacks.

What are some solutions?

Everyone agrees this tech is happening. Many feel that’s a good thing. But no one seems to know enough or want enough to slow down and implement proper security measures. Seems like we should be getting somewhere with IoT security. Somehow we’re neither here nor there. (Okay, enough quoting Soul Asylum.)

Here’s what we think needs to be done to tighten up IoT security.

Government intervention

In order for developers to take security more seriously, action from the government might be required. Government officials can:

  • Work with the cybersecurity and intelligence communities to gather a series of protocols that would make IoT devices safer for consumers and businesses.
  • Develop a committee to review intelligence gathered and select and prioritize protocols in order to craft regulations.
  • Get it passed into law. (Easy peasy lemon squeezy)

Developer action

Developers need to bake security into the product, rather than tacking it on as an afterthought. They should:

  • Have a red team audit the devices prior to commercial release.
  • Force a credential change at the point of setup (i.e., devices will not work unless the default credentials are modified); a rough sketch of this idea follows this list.
  • Require https if there’s web access.
  • Remove unneeded functionality.
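
As a toy illustration of the first two points—every name and default below is invented—a device’s web-based setup page could refuse to do anything over plain HTTP or while the factory credentials are still in place:

<?php
// Toy first-run check for a device's web UI: force HTTPS and refuse to
// proceed while the factory-default credentials are still set.
const DEFAULT_USER = 'admin';
const DEFAULT_PASS = 'admin';

// 1. Require HTTPS for the management interface.
if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] === 'off') {
    header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'], true, 301);
    exit;
}

// 2. Block everything until the default credentials have been changed.
$current = load_device_credentials(); // hypothetical helper: ['user' => ..., 'pass' => ...]
if ($current['user'] === DEFAULT_USER && $current['pass'] === DEFAULT_PASS) {
    http_response_code(403);
    exit('Set a unique username and password before using this device.');
}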

Thankfully, steps are already being taken, albeit slowly, in the right direction. In August 2017, Congress introduced the Internet of Things Cybersecurity Improvement Act, which seeks to require that any devices sold to the US government be patchable, not have any known security vulnerabilities, and allow users to change their default passwords. Note: sold to the US government. They’re not quite as concerned about the privacy and security of us civvies.

And perhaps in response to blowback from social and traditional media, including one of our own posts on smart locks, Amazon is now previewing an IoT security service.

So will cybersecurity makers pick up the slack? Vendors such as Verizon, DigiCert, and Karamba Security have started working on solutions purpose-built for securing IoT devices and networks. But there’s a long way to go before standards are established. In all likelihood, a watershed breach incident (or several), will lead to more immediate action.

How to protect your IoT devices

 What can regular consumers and businesses do to protect themselves in the meantime? Here’s a start:

  • Evaluate if the devices you are bringing into your network really need to be smart. (Do you need a web-enabled toaster?) It’s better to treat IoT tech as hostile by default instead of inherently trusting it with all your personal info—or allowing it access onto your network. Speaking of…
  • Segment your network. If you do want IoT devices in your home or business, separate them from networks that contain sensitive information.
  • Change the default credentials. For the love of God, please come up with a difficult password to crack. And then store it in a password manager and forget about it.

The reason why IoT devices haven’t already short-circuited the world is because a lot of devices are built on different platforms, different operating systems, and use different programming languages (most of them proprietary). So developing malware attacks for every one of those devices is unrealistic. If businesses want to make IoT a profitable model, security WILL increase out of necessity. It’s just a matter of when. Until then…gird your loins.


Dec 6, 2017
John

How to harden AdwCleaner’s web backend using PHP

More and more applications are moving from desktop to the web, where they are particularly exposed to security risks. They are often tied to a database backend, and thus need to be properly secured, even though most of the time they are designed to restrict access to authenticated users only. PHP is used to develop a lot of these web applications, including several dedicated to AdwCleaner management.

There is no magic unique solution to harden a web application, but as always in security, it’s a matter of layers including:

  • Applying the latest security patch and updates
  • Sending the correct HTTP headers
  • Hardening the language stack
  • Hardening the OS
  • Taking network security measures

Since we’re in 2017, we’ll assume that security patches and updates are applied properly, so this article will focus on several must-have HTTP headers, as well as how we harden our web stack at the PHP level in an effective and easy way for the AdwCleaner web management application.

Securing a web application using HTTP headers

There are a lot of standard HTTP headers for various uses (like encoding and caching), and many of them aim to enforce smart security behaviors, like mitigating XSS, in HTTP clients (i.e., web browsers). Here are a few useful ones.

XSS vulnerability example

A website suffering from XSS, without the proper HTTP headers in place to mitigate it.

Strict-Transport-Security

This instructs the browser to connect to the website using HTTPS directly for a certain period of time, set with the max-age directive. It can also be applied to subdomains with the includeSubDomains directive.
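
In a PHP application, the header can be sent before any output; the one-year max-age below is just a common choice, not a requirement.

<?php
// Ask browsers to use HTTPS for this host and its subdomains for one year.
// Only honored when the response itself is delivered over HTTPS.
header('Strict-Transport-Security: max-age=31536000; includeSubDomains');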

Referrer-Policy

This header provides fine-grained control over when the referrer is transmitted. Several directives are available, from no-referrer, which disables the Referer header entirely, to strict-origin-when-cross-origin, which means the full URL is sent with any request made over TLS within the same domain, whereas only the domain is sent as the referrer if the request is made to a different domain or subdomain. Finally, if the request is made over plain HTTP, the referrer is not sent at all.

It’s a handy header especially to reduce internal URL leaks to external services.

X-Content-Type-Options

It enforces the MIME type of resources and states that it shouldn’t be changed. If the MIME type is not the one advertised with the Content-Type header, the request is dropped in order to mitigate MIME confusion attacks. There’s only one directive: nosniff.

Mozilla Documentation

X-Frame-Options

This header controls whether or not the page can be loaded in an iframe or an object. There are different directives, from DENY, which forbids this behaviour entirely, to SAMEORIGIN, which allows it only from the same origin (domain or subdomain), and ALLOW-FROM, which lets the operator specify a single permitted origin.

RFC 7034

X-Robots-Tag

This controls how the page should be handled by crawling bots (i.e., search engines). Several directives exist: noindex, nofollow, nosnippet, and noarchive will prevent the page from being indexed in search results, instruct the crawler not to follow the links on the page, and keep the crawler from storing any copy of the page.

Google documentation

X-XSS-Protection

This legacy header instructs the browser to block any detected XSS attempt when set to 1; mode=block. It’s now superseded by the Content-Security-Policy header but is still useful on older web browsers. This header would have mitigated the XSS on the website shown at the beginning of this article.
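
All of the headers above can be emitted the same way from PHP, before any body output is produced. The directive values below are conservative defaults we might pick, not one-size-fits-all settings.

<?php
// A conservative set of the headers discussed above.
header('Referrer-Policy: strict-origin-when-cross-origin');
header('X-Content-Type-Options: nosniff');
header('X-Frame-Options: DENY');
header('X-Robots-Tag: noindex, nofollow');
header('X-XSS-Protection: 1; mode=block');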

Content-Security-Policy

This powerful header allows the operator to define rules specifying how and from where the webpage’s resources can be loaded. It’s particularly efficient against XSS. For instance, it’s possible to enforce loading resources over HTTPS only using default-src https:, or to forbid inline scripts by leaving ‘unsafe-inline’ out of the script-src (or default-src) directive.
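
A basic policy of that kind could be sent from PHP as shown below; the directives are only an example starting point and would need tuning for a real application such as the AdwCleaner backend.

<?php
// Example Content-Security-Policy: same-origin resources over HTTPS only,
// no plugins, and no framing of the page by other sites.
header(
    "Content-Security-Policy: " .
    "default-src 'self' https:; " .
    "object-src 'none'; " .
    "frame-ancestors 'none'"
);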

It’s possible to create more complex rules, for instance:

  • base-uri ‘none’; forbids the use of the <base> element to change the base URI.
  • default-src ‘self’; uses the origin as the fallback for any fetch directive that is not specified.
  • frame-src ‘none’; forbids any external content from being loaded using iframes.
  • connect-src ‘self’; forbids ping, Fetch, XMLHttpRequest, WebSocket, and EventSource from loading external content.
  • form-action ‘self’; restricts form submissions to the origin.
  • frame-ancestors ‘none’; like X-Frame-Options: DENY, it forbids loading the page in iframes, objects, embeds, or applets.
  • img-src ‘self’ data:; allows images to be loaded from the origin and from data: URIs only.
  • media-src ‘none’; forbids loading any media elements (<audio>, <video>).