Stop piracy now. Increase your revenues.
21 Aug 2017 By Eran Dror
Tamper detection is a means of ensuring an application’s integrity. It validates that the application’s object code wasn’t modified after it was released by the software publisher. Software publishers implement tamper detection to protect against software piracy and malicious software distribution.
According to the BSA Global Software Survey, software piracy accounts for a 39% revenue loss for software publishers, which means that fighting unauthorized software distribution can increase revenues dramatically.
Unauthorized software distribution can’t be achieved unless the attacker modifies the application’s object code. The attacker’s task is to detect and remove the licensing checks from the object code, thereby producing a pirated software copy. Upon execution, the pirated copy no longer validates that the user is permitted to run the application.
Tamper detection helps mitigate this form of attack by validating that the application’s object code is authentic and hasn’t been changed since it was published.
In practice, tamper detection is implemented by injecting a piece of code that computes a hash over the MSIL code. Calls to the tamper detection check are then placed in target methods selected by the software publisher when configuring the tamper detection protection layer.
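As a minimal sketch of the idea (using reflection to read a method’s IL from metadata; the product’s actual check, shown below, hashes the module’s raw in-memory bytes instead, and the class name here is invented for illustration):

```csharp
using System;
using System.Reflection;
using System.Security.Cryptography;

static class IlHasher
{
    // Compute a SHA-256 hash over the IL bytes of a managed method.
    // A tamper check compares this against a hash recorded at protection time.
    public static byte[] HashMethod(MethodInfo method)
    {
        byte[] il = method.GetMethodBody().GetILAsByteArray();
        using (var sha = SHA256.Create())
            return sha.ComputeHash(il);
    }
}
```

Because the hash is deterministic, any change to the method’s IL — such as patching out a licensing check — produces a different value than the one recorded at protection time.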
This brings us to an interesting analysis done for a large financial firm whose risk management team was implementing tamper detection in their flagship product. The analysis raised an important question: how can one ensure that the tamper detection checks aren’t themselves being tampered with?
To illustrate, let’s look at an example of a method that triggers a tamper check:
public static void a(string[] A_0)
{
    Console.WriteLine("This method is tamper protected, or is it?");
    string[] strArray = eval_a.a();
    // Call the tamper detection check; Verify returns true when the hashes match
    bool isValid = Verifier.Verify(Verifier.FindMyAddress(Assembly.GetAssembly(typeof (global::eval_a))), strArray[0], strArray[1], strArray[2]);
    if (!isValid)
        Environment.Exit(1);
    Console.WriteLine("Show me the money!");
}
public static bool Verify(IntPtr baseAddress, string startstr, string endstr, string hashstr)
{
    byte[] numArray1 = Verifier.eval_a(startstr);
    byte[] numArray2 = Verifier.eval_a(endstr);
    byte[] numArray3 = Verifier.eval_a(hashstr);
    byte[] b1 = new byte[20];
    ...
    // Compute method start offset
    long int64_1 = BitConverter.ToInt64(numArray1, num3 * 8);
    // Compute method end offset
    long int64_2 = BitConverter.ToInt64(numArray2, num3 * 8);
    // Copy expected hash to byte array b1
    Array.Copy((Array) numArray3, num3 * 20, (Array) b1, 0, 20);
    totalSize = int64_2 - int64_1;
    // Compute start address
    startAt = Verifier.eval_a(baseAddress, int64_1);
    // Compute hash code over the MSIL code
    inputStream = (Stream) new FixedLengthStream((Stream) new AnyMemoryStream(startAt), totalSize);
    byte[] hash = new SHA1CryptoServiceProvider().ComputeHash(inputStream);
    // Compare hash codes; if the method was tampered with, the signatures won't match
    flag &= Verifier.ArrayCompare(b1, hash);
    return flag;
}
The tamper check computes a hash over the MSIL code and compares the result to the expected hash calculated when the protection was first applied. Alas, the call to the tamper check is completely exposed and can easily be removed by a novice hacker using a hex editor.
This problem poses a perfect use case for the code virtualization technique. One of the properties of code virtualization is complete obscurity of the original MSIL code bytes, due to a one-way transformation applied to the code. The transformation produces a custom instruction set that can’t be processed by a decompiler.
Here is how the same method looks after code virtualization has been applied:
public static void a(string[] A_0)
{
    CSVMRuntime.RunMethod("33e439c4-0c64-4504-9967-1d3055fbe42f", new object[1]
    {
        (object) A_0
    });
}
The entire method body is hidden away: the implementation has been isolated by the code virtualization engine and will be interpreted by it when the method is executed.
Call sites to the verification process are no longer visible in a decompiler. In addition, tracing the verification method by looking for methods that reference the APIs it uses (e.g. ComputeHash) is not an option, since the virtualization engine virtualizes those calls as well.
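To make the principle concrete, here is a toy sketch of code virtualization. The opcode set and dispatcher are invented for illustration — commercial virtualizers are far more elaborate — but the core idea is the same: the original logic survives only as bytes in a custom instruction set executed by an embedded interpreter, so a decompiler sees neither MSIL nor the original control flow.

```csharp
using System;
using System.Collections.Generic;

// Toy stack-based virtual machine. The "program" bytes are meaningless to a
// decompiler; only this interpreter knows how to execute them.
static class TinyVm
{
    // Invented opcode set: 0 = push next byte, 1 = add, 2 = multiply.
    public static int Run(byte[] program)
    {
        var stack = new Stack<int>();
        for (int ip = 0; ip < program.Length; ip++)
        {
            switch (program[ip])
            {
                case 0: stack.Push(program[++ip]); break;          // push literal
                case 1: stack.Push(stack.Pop() + stack.Pop()); break; // add
                case 2: stack.Push(stack.Pop() * stack.Pop()); break; // multiply
                default: throw new InvalidOperationException("bad opcode");
            }
        }
        return stack.Pop();
    }
}
```

A protected method such as `return a + b;` would be replaced by a call like `TinyVm.Run(new byte[] { 0, 2, 0, 3, 1 })`, which pushes 2 and 3 and adds them — none of which is visible as MSIL.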
17 Apr 2017 By Eran Dror
Lately we’ve been working closely with a security group at a well-known Fortune 100 company. Our task was to help the company improve the protection of one of its core products, on which many of its software assets depend.
It turns out the team had decided to develop their own home-grown solution to prevent unauthorized debugging by the competition or by hackers looking to distribute unauthorized copies of their software.
No doubt, a debugger is a powerful tool for watching and interacting with an application. An attacker can use it to step through the application code and understand how sensitive algorithms work. Using a debugger, one can also change application state at runtime in order to bypass licensing or any other checks – so the team’s motivation for placing debugging checks makes perfect sense.
During the examination process, one of the things that immediately caught our attention was the team’s decision to use mscorlib’s Debugger.IsAttached property to detect whether a debugger is attached to the running process.
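A check in that style might look like the following sketch (the class and method names here are hypothetical, not the team’s actual code):

```csharp
using System;
using System.Diagnostics;

static class NaiveAntiDebug
{
    // Naive approach: trust the runtime's own report of debugger presence.
    // As shown below, Debugger.IsAttached can itself be patched in memory,
    // so this check alone is easy to defeat.
    public static void ExitIfDebugged()
    {
        if (Debugger.IsAttached)
            Environment.Exit(1);
    }
}
```

The weakness is that the check delegates entirely to a single, well-known framework member — a fixed target that an attacker can find and neutralize.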
This poses a major security risk: by performing a CLR injection attack, one can effectively replace the implementation of that property, having it return false regardless of whether a debugger is actually present.
A CLR injection attack is performed by intercepting arbitrary binary functions. Interception code is applied dynamically at runtime by replacing the first few instructions of the target function with an unconditional jump to a user-provided method.
The code of the target method is modified in memory, not on disk, thus facilitating interception of binary functions at a very fine granularity. For example, the methods in a DLL can be modified in one execution of an application, while the original procedures are not detoured in another execution.
To illustrate the point, I’ve created a simple console application demonstrating a CLR injection attack on the Debugger.IsAttached property. You can find the source and documentation on GitHub.
public static void Main(string[] args)
{
    Console.WriteLine(string.Format("Is Debugger Attached: {0}", Debugger.IsAttached));
    var isAttachedMethodInfo = typeof(Debugger).GetProperty("IsAttached").GetGetMethod();
    var myDebuggerIsAttachedMethodInfo = typeof(Program).GetMethod("MyDebuggerIsAttached");
    // Make sure both methods are JIT-compiled before taking their addresses.
    RuntimeHelpers.PrepareMethod(isAttachedMethodInfo.MethodHandle);
    RuntimeHelpers.PrepareMethod(myDebuggerIsAttachedMethodInfo.MethodHandle);
    // Get a pointer to the original getter's native code.
    var originalPointer = isAttachedMethodInfo.MethodHandle.GetFunctionPointer();
    // Get MyDebuggerIsAttached's address (assumes a 32-bit process).
    byte[] myDebuggerIsAttachedAddress = BitConverter.GetBytes(myDebuggerIsAttachedMethodInfo.MethodHandle.GetFunctionPointer().ToInt32());
    // push <address>; ret -- an unconditional jump to MyDebuggerIsAttached.
    var newCodeBytes = new byte[]
    {
        0x68, myDebuggerIsAttachedAddress[0], myDebuggerIsAttachedAddress[1], myDebuggerIsAttachedAddress[2], myDebuggerIsAttachedAddress[3], 0xC3
    };
    uint oldProtect = 0;
    // Make the code page writable (0x40 = PAGE_EXECUTE_READWRITE), then patch it.
    VirtualProtect(originalPointer, 60, 64U, out oldProtect);
    Marshal.Copy(newCodeBytes, 0, originalPointer, 6);
    Console.WriteLine(string.Format("Is Debugger Attached: {0}", Debugger.IsAttached));
    // Run the debug target
    Console.WriteLine("Launching DebugTarget.exe");
    AppDomain.CurrentDomain.ExecuteAssembly(@"DebugTarget.exe");
    Console.WriteLine("Press any key to continue");
    Console.ReadKey();
}

// Replacement implementation: always reports that no debugger is attached.
public static bool MyDebuggerIsAttached()
{
    return false;
}

[DllImport("kernel32.dll")]
private static extern bool VirtualProtect(IntPtr lpAddress, uint dwSize, uint flNewProtect, out uint lpflOldProtect);
A few notes about the implementation: