Archive

Author Archive

Enable Multifactor Authentication (MFA) on ASP.NET MVC

April 2, 2019 Leave a comment

Multi-Factor Authentication (MFA), sometimes called two-factor authentication, is a simple best practice that adds an extra layer of protection on top of your user name and password. Wikipedia says “Multi-factor authentication is an authentication method in which a computer user is granted access only after successfully presenting two or more pieces of evidence to an authentication mechanism”. You should use MFA whenever possible, especially when it comes to your most sensitive data, such as your primary email, your financial accounts and personal details.

As per the author James Michael Stewart from Global Knowledge, there are three types of authentication factors:

Type 1 – Something You Know – includes passwords, PINs, combinations, code words, or secret handshakes. Anything that you can remember and then type, say, do, perform, or otherwise recall when needed falls into this category.

Type 2 – Something You Have – includes all items that are physical objects, such as keys, smart phones, smart cards, USB drives, and token devices. (A token device produces a time-based PIN or can compute a response from a challenge number issued by the server.)

Type 3 – Something You Are – includes any part of the human body that can be offered for verification, such as fingerprints, palm scanning, facial recognition, retina scans, iris scans, and voice verification.

In this article, we demonstrate how to implement MFA on your ASP.NET MVC application using Google Authenticator.

  1. Create a new ASP.NET MVC application.
  2. Go to the Manage NuGet Packages window and install the “GoogleAuthenticator” package by Brandon Potter.
  3. Once the package is installed, the corresponding reference is added to your application.
  4. In your Login controller, create a private constant called “Key”. Also do not forget to add a using directive for Google.Authenticator in your code file.

private const string Key = "2391a4518d30c8c565257bfba097b4a7";

  5. In your Login action method, add the following piece of code:

var setupInfo = new TwoFactorAuthenticator().GenerateSetupCode("YourApplicationName", "short description of your application", Key, 300, 300); // 300 x 300 is the width and height of the QR code image
string qrCodeImageUrl = setupInfo.QrCodeSetupImageUrl;  // URL of the generated QR code image
string manualEntrySetupCode = setupInfo.ManualEntryKey; // manual entry key for users who cannot scan the QR code
ViewBag.BarcodeImageUrl = qrCodeImageUrl;               // bind the QR code image URL to the view
ViewBag.SetupCode = setupInfo.ManualEntryKey;           // show the manual entry setup code on the view

Here we create an instance of the TwoFactorAuthenticator class and call its GenerateSetupCode method. This method accepts the application name, a short description, the secret key, and the width and height of the QR code image, and returns a SetupCode object that contains the Account, AccountSecretKey, ManualEntryKey and QrCodeSetupImageUrl values. The QrCodeSetupImageUrl property can be used to display the QR code image on the screen.

To validate, download the Google Authenticator app on your smartphone and scan the QR code displayed on the screen. As soon as you scan the image, the app shows a 6-digit, time-based code, which you then pass back to the TwoFactorAuthenticator class for verification. You can use the below piece of code to achieve this.

var token = Request["passcode"]; // 6-digit code generated by the Google Authenticator app
var authenticator = new TwoFactorAuthenticator();
var isValid = authenticator.ValidateTwoFactorPIN(Key, token);
if (isValid)
{
    return RedirectToAction("UserProfile", "Home"); // authentication successful
}
return RedirectToAction("Login", "Home"); // authentication failed
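
Putting the snippets above together, a minimal controller sketch might look like the following. The action names, the passcode form field and the view are illustrative additions; only the GoogleAuthenticator calls shown earlier are taken from the article, and in a real application the secret key would normally be generated and stored per user rather than hard-coded.

using System.Web.Mvc;
using Google.Authenticator;

public class LoginController : Controller
{
    // Demo secret; a real application would generate and store one key per user.
    private const string Key = "2391a4518d30c8c565257bfba097b4a7";

    [HttpGet]
    public ActionResult Login()
    {
        var setupInfo = new TwoFactorAuthenticator()
            .GenerateSetupCode("YourApplicationName", "short description of your application", Key, 300, 300);

        ViewBag.BarcodeImageUrl = setupInfo.QrCodeSetupImageUrl; // QR code image for the authenticator app
        ViewBag.SetupCode = setupInfo.ManualEntryKey;            // manual entry key as a fallback
        return View();
    }

    [HttpPost]
    public ActionResult Verify()
    {
        var token = Request["passcode"]; // 6-digit code from the Google Authenticator app
        var isValid = new TwoFactorAuthenticator().ValidateTwoFactorPIN(Key, token);

        return isValid
            ? RedirectToAction("UserProfile", "Home") // authentication successful
            : RedirectToAction("Login", "Home");      // authentication failed
    }
}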

Run the application, log in, and then use the Google Authenticator app on your phone to scan the QR code shown in your web app.

Go back to the web app and type in the 6-digit token from your Google Authenticator app.

Note: Google Authenticator is a software-based authenticator by Google that implements two-step verification using the Time-based One-time Password (TOTP) and HMAC-based One-time Password (HOTP) algorithms.

Categories: IIS

Data Annotations for Complex class structures

January 30, 2014 Leave a comment

Data Annotations in C# Classes

When you use data classes (also known as entity classes or POCO) in your application, you can apply attributes to the class or members that specify validation rules, specify how the data is displayed, and set relationships between classes. The “System.ComponentModel.DataAnnotations” namespace contains the classes that are used as data attributes. By applying these attributes on the data class or member, you centralize the data definition and do not have to re-apply the same rules in multiple places.

The “System.ComponentModel.DataAnnotations” namespace contains the following attributes which are used to enforce validation rules for data applied to the class or member:

  • CustomValidationAttribute – Uses a custom method for validation.
  • DataTypeAttribute – Specifies a particular type of data, such as e-mail address or phone number.
  • EnumDataTypeAttribute – Ensures that the value exists in an enumeration.
  • RangeAttribute – Designates minimum and maximum constraints.
  • RegularExpressionAttribute – Uses a regular expression to determine valid values.
  • RequiredAttribute – Specifies that a value must be provided.
  • StringLengthAttribute – Designates maximum and minimum number of characters.
  • ValidationAttribute – Serves as the base class for validation attributes.
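
As a quick illustration of how a few of these attributes are applied, consider the following POCO class. The class and its members are not from the article; they are only an example.

using System.ComponentModel.DataAnnotations;

public class Employee
{
    [Required]
    [StringLength(50, MinimumLength = 2)]
    public string Name { get; set; }

    [DataType(DataType.EmailAddress)]
    [RegularExpression(@"^[^@\s]+@[^@\s]+\.[^@\s]+$", ErrorMessage = "Invalid e-mail address.")]
    public string Email { get; set; }

    [Range(18, 65)]
    public int Age { get; set; }
}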

All validation attributes derive from the “ValidationAttribute” class. The logic to determine if a value is valid is implemented in the overridden “IsValid” method. The “Validate” method calls the “IsValid” method and throws a “ValidationException” if the value is not valid.

The below code snippet can be used to validate plain entity classes, e.g. POCO entities.

private static List<ValidationResult> CallMyFunction(object oObject)
{
    var context = new ValidationContext(oObject, serviceProvider: null, items: null);
    var results = new List<ValidationResult>();
    Validator.TryValidateObject(oObject, context, results, false);
    return results;
}
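
Assuming the helper above lives in the same class as the illustrative Employee type sketched earlier (and that System, System.Collections.Generic and System.ComponentModel.DataAnnotations are imported), it can be called like this:

static void Main()
{
    var employee = new Employee { Name = null, Email = "not-an-email", Age = 10 };
    List<ValidationResult> errors = CallMyFunction(employee);

    // Note: TryValidateObject is called with validateAllProperties: false above,
    // so only [Required] attributes are evaluated here.
    foreach (ValidationResult error in errors)
    {
        Console.WriteLine(error.ErrorMessage); // e.g. "The Name field is required."
    }
}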

To create customized validation checks, you can either create a class that derives from the “ValidationAttribute” class or create a method that performs the validation check and reference that method when applying the “CustomValidationAttribute” to the data member. When you create a class that derives from “ValidationAttribute”, override the “IsValid” method to provide the logic for your customized validation check.
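
The method-based approach mentioned above could look like the following sketch; the class, method and property names are illustrative and not from the article. The derived-attribute approach is covered below.

using System.ComponentModel.DataAnnotations;

public static class OrderValidators
{
    // The referenced method must be public static, return ValidationResult,
    // and accept the value (optionally followed by a ValidationContext).
    public static ValidationResult ValidateQuantity(int quantity, ValidationContext context)
    {
        return quantity > 0
            ? ValidationResult.Success
            : new ValidationResult("Quantity must be greater than zero.");
    }
}

public class OrderLine
{
    [CustomValidation(typeof(OrderValidators), "ValidateQuantity")]
    public int Quantity { get; set; }
}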

Note: Data Annotations validation does not automatically validate complex child objects when validating a parent object, so their results are not included in the populated “ICollection<ValidationResult>”. To validate a complex property (for example, a nested view model), you need to create your own validation attribute that validates the child properties and apply it to that property.

using System.ComponentModel.DataAnnotations;
using System.Linq;

public class ComplexClassValidation : ValidationAttribute
{
    // Validates every property of the complex (child) object that carries a ValidationAttribute.
    protected override ValidationResult IsValid(object value, ValidationContext validationContext)
    {
        var isValid = true;
        var result = ValidationResult.Success;

        var nestedValidationProperties = value.GetType().GetProperties()
            .Where(p => IsDefined(p, typeof(ValidationAttribute)))
            .OrderBy(p => p.Name);

        foreach (var property in nestedValidationProperties)
        {
            var validators = GetCustomAttributes(property, typeof(ValidationAttribute)) as ValidationAttribute[];
            if (validators == null || validators.Length == 0) continue;

            foreach (var validator in validators)
            {
                var propertyValue = property.GetValue(value, null);
                result = validator.GetValidationResult(propertyValue, new ValidationContext(value, null, null));
                if (result == ValidationResult.Success) continue;

                isValid = false;
                break;
            }

            if (!isValid)
            {
                break;
            }
        }

        return result;
    }
}
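
To apply the attribute above, decorate the complex (nested) property of the parent view model with it. The view model classes here are only an illustration.

using System.ComponentModel.DataAnnotations;

public class AddressViewModel
{
    [Required]
    public string City { get; set; }
}

public class CustomerViewModel
{
    [Required]
    public string Name { get; set; }

    [ComplexClassValidation]   // child properties of Address are now validated as well
    public AddressViewModel Address { get; set; }
}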

There are situations when you need to read the value of a nested property of an object using reflection, given its fully qualified name. For such situations you can use the below code snippet.

public static object GetNestedPropertyValue(object customObject, string fullyQualifiedPropertyName)
{
    if (!String.IsNullOrEmpty(fullyQualifiedPropertyName))
    {
        // Walk the object graph one property at a time, e.g. "Address.City".
        foreach (string propertyName in fullyQualifiedPropertyName.Split('.'))
        {
            PropertyInfo propertyInfo = customObject.GetType().GetProperty(propertyName);
            customObject = propertyInfo.GetValue(customObject, null);
        }
    }

    if (customObject == null)
        throw new Exception("Property value could not be determined");

    return customObject;
}
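
For example, using the illustrative view models sketched in the previous section (and assuming the method above is in scope and System is imported), a nested value can be read like this:

static void Main()
{
    var customer = new CustomerViewModel
    {
        Name = "John",
        Address = new AddressViewModel { City = "Pune" }
    };

    // Walks "Address", then "City", via reflection and returns the final value.
    object city = GetNestedPropertyValue(customer, "Address.City");
    Console.WriteLine(city); // Pune
}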

The above method takes the fully qualified property name and the custom object (to which the property belongs) as input parameters and returns the value of that nested property.

For more details about Data Annotations, see the System.ComponentModel.DataAnnotations documentation.

Reference:

  1. Validator Class
  2. ValidationResult Class
  3. ValidationContext Class
  4. http://stackoverflow.com/questions/17944211/how-can-i-iterate-through-nested-classes-and-pass-the-object-to-my-function

Builder Design Pattern

September 24, 2012 Leave a comment

Introduction: The Builder Design Pattern helps us slice the operations of building an object into steps. It also enforces a process to create an object as a finished product: the object has to go through a set of prescribed steps before it is ready and can be used by others. This building process can apply any restrictions or business rules that make up a complete building procedure (a procedure we follow to make an object that is considered ready to use). For instance, to compose an email, you cannot leave the To and Subject fields blank before you can send it. In other words, an email object is considered incomplete (a common business rule for email) if those two fields are not filled. It has to be built (filled) before we can send it out.

The Builder Design Pattern allows you to create a general guideline on how to create an object, then have different implementations on how to build parts of the object.

There are two principles in the Builder Pattern; let’s use an example of building an airplane to demonstrate the features:

  • The first principle is the general guideline that must be followed when building an object. For example, in building an airplane, the body must be constructed before the wings. This general guideline must be followed regardless of what type of airplane you are building.
  • The second principle is the set of different specifications for building the parts of the airplane. When building a jet airplane, the body must be built differently than for a propeller airplane. These specifications are included in the pattern.

With these two principles in place, let’s look at the UML of the Builder Pattern using the example of building an airplane:

The Director class contains the logic of the general guideline. In this case, its BuildAirplane method would specify that the airplane body must be built before the wing. Therefore the code for the BuildAirplane method would be:

public Airplane BuildAirplane(IManufacturer m)      
{         
    m.BuildBody();  //we build the body first
    m.BuildWing();  //then we build the wing
    return m.GetProduct();
}
  • The IManufacturer interface specifies the methods that all airplane manufacturers must support. We see that a manufacturer must be able to build the airplane body with the BuildBody method and build the wing with the BuildWing method. The GetProduct method just returns the product that is being built, which is the airplane.
  • The Manufacturer class is the concrete manufacturer class that implements how the parts of an airplane are built; hence it implements the IManufacturer interface and holds a reference to the product variable, which is the airplane being built.
  • The Airplane class is just the final product being built. (A fuller sketch of these classes follows below.)
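
The article only shows the Director's BuildAirplane method; the remaining pieces could be sketched as follows. The part values and the second PropellerManufacturer class are illustrative additions, not part of the original example.

public class Airplane
{
    public string Body { get; set; }
    public string Wing { get; set; }
}

public interface IManufacturer
{
    void BuildBody();
    void BuildWing();
    Airplane GetProduct();
}

// Concrete manufacturer from the example: builds a jet airplane.
public class Manufacturer : IManufacturer
{
    private readonly Airplane product = new Airplane();

    public void BuildBody() { product.Body = "Jet body"; }
    public void BuildWing() { product.Wing = "Swept-back wing"; }
    public Airplane GetProduct() { return product; }
}

// A second, hypothetical manufacturer: same steps, different specifications.
public class PropellerManufacturer : IManufacturer
{
    private readonly Airplane product = new Airplane();

    public void BuildBody() { product.Body = "Propeller body"; }
    public void BuildWing() { product.Wing = "Straight wing"; }
    public Airplane GetProduct() { return product; }
}

public class Director
{
    // BuildAirplane is the method shown above: body first, then wing.
    public Airplane BuildAirplane(IManufacturer m)
    {
        m.BuildBody();
        m.BuildWing();
        return m.GetProduct();
    }
}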

With the Builder Design Pattern in place, the client code (calling code) to build an airplane will just be:

Airplane a = director.BuildAirplane(new Manufacturer());

The Builder Pattern allows you to create different concrete airplane manufacturers that specify how the parts of the airplane are constructed. You can then pass any manufacturer to the director, and it will build the airplane according to that manufacturer's specifications without having to change the client code.
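
Using the sketch above, swapping manufacturers is just a different argument to the same director:

var director = new Director();
Airplane jet  = director.BuildAirplane(new Manufacturer());          // jet specifications
Airplane prop = director.BuildAirplane(new PropellerManufacturer()); // propeller specifications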

The key to the Builder Pattern is that:

  • You only need to determine the sequence in which the product (the airplane) is constructed in the Director.
  • You can have different implementations of the manufacturers for how the parts of an object (the body and the wings) are constructed.

The benefit of the Builder Pattern is that you can swap out any implementation of how the parts are built by changing the manufacturer, and the rest of the client code will not need to be changed.

In application frameworks today, we often see the Builder Pattern being utilized. For example, you may have multiple configuration files that hold information on database services, file location services, and notification services. These configuration files would be your manufacturers, where each has its own specifications on how each part of the configuration object should be built. The director would specify the way to read the configuration file; for example, you may need to read the database services before you read the notification services.

Categories: IIS

SAP Connection Manager using .NET Connector 3.0

June 10, 2012 11 comments

SAP Connection Manager: This is a simple C# class library project to connect .NET applications with SAP. The component internally uses SAP .NET Connector 3.0. The SAP .NET Connector is a development environment that enables communication between the Microsoft .NET platform and SAP systems. It supports RFCs and Web services, and allows you to write different applications such as Web form, Windows form, or console applications in Microsoft Visual Studio .NET. With the SAP .NET Connector, you can use all common programming languages, such as Visual Basic .NET, C#, or Managed C++.

Features: Using the SAP .NET Connector you can:

  1. Write .NET Windows and Web form applications that have access to SAP business objects (BAPIs).
  2. Develop client applications for the SAP Server.
  3. Write RFC server applications that run in a .NET environment and can be started from the SAP system.

Configuration Steps:

Following are the steps to configure this utility on your project

  1. Download and extract the attached file and place it on your machine. This package contains 4 libraries:
    1. SAPConnectionManager.dll
    2. SAPConnectionManager
    3. sapnco.dll
    4. sapnco_utils.dll

Now go to your project and add references to these libraries. sapnco.dll and sapnco_utils.dll are the core libraries of the SAP .NET Connector; SAPConnectionManager.dll is the main component that provides the connection between .NET and SAP.

Once the above steps are complete, you need to add certain SAP server-related entries to your configuration file. Here are the sample entries to maintain in your own project; change only the values to match your SAP system, the keys remain unchanged.

<appSettings>
  <add key="ServerHost" value="127.0.0.1"/>
  <add key="SystemNumber" value="00"/>
  <add key="User" value="sample"/>
  <add key="Password" value="pass"/>
  <add key="Client" value="50"/>
  <add key="Language" value="EN"/>
  <add key="PoolSize" value="5"/>
  <add key="PeakConnectionsLimit" value="10"/>
  <add key="IdleTimeout" value="600"/>
</appSettings>
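
The SAPSystemConnect class used in the test code below ships inside SAPConnectionManager.dll, so its source is not part of this article. As a rough sketch (a hypothetical reconstruction, not the library's actual code), such a class typically implements the NCo 3.0 IDestinationConfiguration interface and maps the appSettings entries above to RfcConfigParameters, along these lines:

using System.Configuration;           // requires a reference to System.Configuration
using SAP.Middleware.Connector;

public class SAPSystemConnect : IDestinationConfiguration
{
    public RfcConfigParameters GetParameters(string destinationName)
    {
        // "Dev" is the destination name requested later via GetDestination("Dev").
        if (destinationName != "Dev") return null;

        var parms = new RfcConfigParameters();
        parms.Add(RfcConfigParameters.AppServerHost, ConfigurationManager.AppSettings["ServerHost"]);
        parms.Add(RfcConfigParameters.SystemNumber, ConfigurationManager.AppSettings["SystemNumber"]);
        parms.Add(RfcConfigParameters.User, ConfigurationManager.AppSettings["User"]);
        parms.Add(RfcConfigParameters.Password, ConfigurationManager.AppSettings["Password"]);
        parms.Add(RfcConfigParameters.Client, ConfigurationManager.AppSettings["Client"]);
        parms.Add(RfcConfigParameters.Language, ConfigurationManager.AppSettings["Language"]);
        parms.Add(RfcConfigParameters.PoolSize, ConfigurationManager.AppSettings["PoolSize"]);
        parms.Add(RfcConfigParameters.PeakConnectionsLimit, ConfigurationManager.AppSettings["PeakConnectionsLimit"]);
        parms.Add(RfcConfigParameters.IdleTimeout, ConfigurationManager.AppSettings["IdleTimeout"]);
        return parms;
    }

    public bool ChangeEventsSupported()
    {
        return false; // configuration is static, read once from app.config
    }

    public event RfcDestinationManager.ConfigurationChangeHandler ConfigurationChanged;
}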

  1. To test this component, create a Windows Forms application and add references to sapnco.dll, sapnco_utils.dll and SAPConnectionManager.dll in your project.
  2. Paste the below code into your Form Load event:

SAPSystemConnect sapCfg = new SAPSystemConnect();
RfcDestinationManager.RegisterDestinationConfiguration(sapCfg);

RfcDestination rfcDest = RfcDestinationManager.GetDestination("Dev");

That’s it. You are now successfully connected to your SAP server. Next, you need to call SAP business objects (BAPIs), extract the data, and store it in a dataset or list.
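
Once the destination is available, calling a BAPI with NCo 3.0 generally follows the pattern sketched below, continuing the Form Load snippet above (and assuming using SAP.Middleware.Connector;). The BAPI, table and field names here are only an example and should be replaced with whatever your scenario requires.

RfcRepository repo = rfcDest.Repository;
IRfcFunction fn = repo.CreateFunction("BAPI_COMPANYCODE_GETLIST"); // example BAPI
fn.Invoke(rfcDest);                                                // execute on the SAP system

IRfcTable companies = fn.GetTable("COMPANYCODE_LIST");             // example result table
foreach (IRfcStructure row in companies)
{
    string code = row.GetString("COMP_CODE");                      // example fields
    string name = row.GetString("COMP_NAME");
}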

Categories: .NET Connector 3.0, C#, SAP

Hash Collision Attacks in ASP.NET Web Applications

February 11, 2012 Leave a comment

Overview:
Hash tables are a commonly used data structure in most programming languages. Web application servers or platforms commonly parse attacker-controlled POST form data into hash tables automatically, so that they can be accessed by application developers.
If the language does not provide a randomized hash function or the application server does not recognize attacks using multi-collisions, an attacker can degenerate the hash table by sending lots of colliding keys. The algorithmic complexity of inserting n elements into the table then goes to O(n^2), making it possible to exhaust hours of CPU time using a single HTTP request.
Most hash functions used in hash table implementations can be broken faster than by using brute-force techniques (which is feasible for hash functions with 32 bit output, but very expensive for 64 bit functions) by using one of two “tricks”: equivalent substrings or a meet-in-the-middle attack.
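
The quadratic behaviour is easy to reproduce without crafting real collisions for a particular hash function: forcing every key into the same bucket with a degenerate comparer has the same effect on a .NET dictionary. The sketch below is only an illustration of the complexity argument, not of the attack itself.

using System;
using System.Collections.Generic;
using System.Diagnostics;

class DegenerateComparer : IEqualityComparer<string>
{
    public bool Equals(string x, string y) { return x == y; }
    public int GetHashCode(string obj) { return 0; } // every key lands in the same bucket
}

class HashFloodDemo
{
    static void Main()
    {
        foreach (int n in new[] { 5000, 10000, 20000 })
        {
            var table = new Dictionary<string, string>(new DegenerateComparer());
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < n; i++)
            {
                table["key" + i] = "value"; // each insert scans the whole collision chain
            }
            sw.Stop();
            Console.WriteLine("{0} inserts: {1} ms", n, sw.ElapsedMilliseconds); // roughly quadruples as n doubles
        }
    }
}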

1. Equivalent substrings
Some hash functions have the property that if two strings collide, e.g. hash(‘string1’) = hash(‘string2’), then hashes having this substring at the same position collide as well, e.g. hash(‘prefixstring1postfix’) = hash(‘prefixstring2postfix’). If for example ‘Ez’ and ‘FY’ collide under a hash function with this property, then ‘EzEz’, ‘EzFY’, ‘FYEz’ and ‘FYFY’ collide as well. An observant reader may notice that this is very similar to binary counting from zero to three. Using this knowledge, an attacker can construct arbitrary numbers of collisions (2^n collisions for strings of length 2*n in this example).
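
Generating the colliding keys from such a pair is then just binary counting, as the following sketch shows. ‘Ez’ and ‘FY’ are the hypothetical colliding pair from the text and would have to be replaced by substrings that actually collide under the target hash function.

using System;
using System.Collections.Generic;
using System.Text;

static class CollisionExpander
{
    // Produces 2^blocks keys; every key of length 2*blocks hashes to the same value
    // if the two building blocks collide under an equivalent-substring hash function.
    public static IEnumerable<string> Expand(string a, string b, int blocks)
    {
        long total = 1L << blocks;
        for (long i = 0; i < total; i++)
        {
            var sb = new StringBuilder();
            for (int bit = blocks - 1; bit >= 0; bit--)
            {
                sb.Append(((i >> bit) & 1) == 0 ? a : b); // each bit selects one of the two blocks
            }
            yield return sb.ToString();
        }
    }

    static void Main()
    {
        foreach (string key in Expand("Ez", "FY", 2))
        {
            Console.WriteLine(key); // EzEz, EzFY, FYEz, FYFY
        }
    }
}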

2. Meet-in-the-middle attack
If equivalent substrings are not present in a given hash function, then brute-force seems to be the only solution. The obvious way to best use brute-force would be to choose a target value and hash random (fixed-size) strings and store those which hash to the target value. For a non-biased hash function with 32 bit output length, the probability of hitting a target in this way is 1/(2^32).
A meet-in-the-middle attack now tries to hit more than one target at a time. If the hash function can be inverted and the internal state of the hash function has the same size as the output, one can split the string into two parts, a prefix (of size n) and a postfix (of size m). One can now iterate over all possible m-sized postfix strings and calculate the intermediate value under which the hash function maps to a certain target. If one stores these strings and corresponding intermediate value in a lookup table, one can now generate random n-sized prefix strings and see if they map to one of the intermediate values in the lookup table. If this is the case, the complete string will map to the target value.
Splitting in the middle reduces the complexity of this attack by the square root, which gives us the probability of 1/(2^16) for a collision, thus enabling an attacker to generate multi-collisions much faster.
The hash functions we looked at which were vulnerable to an equivalent substring attack were all vulnerable to a meet-in-the-middle attack as well. In this case, the meet-in-the-middle attack provides more collisions for strings of a fixed size than the equivalent substring attack.

Different languages use different hash functions, which suffer from different problems. They also differ in how they use hash tables to store POST form data.

ASP.NET uses the Request.Form object to provide POST data to web application developers. This object is of class NameValueCollection, which uses a different hash function than the standard .NET one, namely CaseInsensitiveHashCodeProvider.GetHashCode(). This is the DJBX33X (Dan Bernstein’s times 33, XOR) hash function applied to the uppercase version of the key, which is breakable using a meet-in-the-middle attack.
CPU time is limited by the IIS web server to a value of typically 90 seconds. This allows an attacker with about 30 kbit/s to keep one Core2 core constantly busy. An attacker with a Gigabit connection can keep about 30,000 Core2 cores busy.

Java offers the HashMap and Hashtable classes, which use the String.hashCode() hash function. It is very similar to DJBX33A (instead of 33, it uses the multiplication constant 31 and instead of the start value 5381 it uses 0). Thus it is also vulnerable to an equivalent substring attack. When hashing a string, Java also caches the hash value in the hash attribute, but only if the result is different from zero.
Thus, the target value zero is particularly interesting for an attacker as it prevents caching and forces re-hashing.
Different web application servers parse the POST data differently, but the ones tested (Tomcat, Geronimo, Jetty, GlassFish) all put the POST form data into either a Hashtable or a HashMap object. The maximum POST sizes also differ from server to server, with 2 MB being the most common.
A Tomcat 6.0.32 server parses a 2 MB string of colliding keys in about 44 minutes of i7 CPU time, so an attacker with about 6 kbit/s can keep one i7 core constantly busy. If the attacker has a Gigabit connection, he can keep about 100,000 i7 cores busy.
Any website running one of the above technologies which provides the option to perform a POST request is vulnerable to very effective DoS attacks.
As the attack is just a POST request, it could also be triggered from within a (third-party) website. This means that a cross-site-scripting vulnerability on a popular website could lead to a very effective DDoS attack (not necessarily against the same website).

Workarounds:

1. Limiting CPU time
The easiest way to reduce the impact of such an attack is to reduce the CPU time that a request is allowed to take. For PHP, this can be configured using the max_input_time parameter. On IIS (for ASP.NET), this can be configured using the "shutdown time limit for processes" parameter.

2. Limiting maximal POST size
If you can live with the fact that users can not put megabytes of data into your forms, limiting the form size to a small value (in the 10s of kilobytes rather than the usual megabytes) can drastically reduce the impact of the attack as well.

3. Limiting maximal number of parameters
Updated Tomcat versions offer an option to limit the number of parameters accepted, independently of the maximum POST size. This can also be configured in the Suhosin version of PHP using the suhosin.{post|request}.max_vars parameters.

Categories: IIS, Others