Saturday, December 17, 2016

Functions in JavaScript

The power and beauty of JS lies in the concept of functions. They are one of the two crucial aspects of the language (the other being objects). Though the language is flexible enough to suit many programming paradigms, it lends itself naturally to functional programming. Anyway, this will not be a post on functional programming but rather an objective study of functions and their nuances. 
So jumping right in, functions in JavaScript are treated as value types. Coming from a Java background, this was a drastic change, and it took some time to get used to the idea. 

  • Functions,
    • are first class objects - What does first class mean? In simple terms, it means they can be passed around like any other value. Think of them as callable objects.
    • provide local scope - A var declared inside a function is visible/accessible anywhere within that function.

  • Functions can be used as plain functions, methods, or constructors.
  • Functions that do not explicitly return a value return undefined.
  • Function statement vs Function expression
    • If the first token in a statement is ‘function’ then it is a function statement.
      • function foo() { }
    • Function expression can be 
      • named
        • var foo = function foo() { }
      • unnamed ( also known as anonymous function)
        • var foo = function() { }
  • Hoisting. A var declared anywhere within a function is hoisted to the top of the function and initialized to undefined; the line where it was declared becomes a plain assignment. Function statements are hoisted as well (body and all), but function expressions are not.
              // Outputs: undefined (foo is hoisted, but not yet assigned)
              console.log(foo);

              var foo = "Declared";

              // Outputs: "Declared"
              console.log(foo);
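
The difference for functions can be seen in a small runnable sketch (the names add and sub are made up for illustration):

```javascript
// Function statements are hoisted with their body, so calling before the
// definition works:
console.log(add(2, 3)); // 5

function add(a, b) { return a + b; }

// With a function expression, only `var sub` is hoisted (as undefined);
// calling sub() above this line would throw a TypeError.
var sub = function (a, b) { return a - b; };
console.log(sub(5, 2)); // 3
```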
  • Functions have two pseudo parameters:
    • arguments - An array-like object (it has a length property) that every function receives in addition to the parameters it declares. It contains all the arguments from the invocation.
    • this - A reference to the object of invocation. Its value is bound at the time of invocation, i.e. it depends on the style of invocation:
      • Function style -
        • foo() // value is either the global object (ES3) or undefined (ES5 strict mode)
      • Method style - 
        • obj.foo() // value is obj
      • Constructor style - 
        • new Foo() // A new object is created and returned
      • Call and apply style - All javascript functions have two methods, call and apply, which let you call functions and explicitly set the value of this
        • foo.apply(obj, args-array) // value is obj
        • foo.call(obj, arg1, arg2, arg3) // value is obj
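
The four styles can be seen in one runnable sketch (run in strict mode so the function-style value is undefined; the names whoAmI and Foo are made up):

```javascript
'use strict';

function whoAmI() { return this; }

var obj = { name: 'obj', whoAmI: whoAmI };

console.log(whoAmI());               // undefined (function style, strict mode)
console.log(obj.whoAmI().name);      // "obj" (method style)
console.log(whoAmI.call(obj).name);  // "obj" (call style)
console.log(whoAmI.apply(obj).name); // "obj" (apply style)

function Foo() { this.created = true; } // constructor style
console.log(new Foo().created);      // true (this is a brand-new object)
```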
  • The idea of functions started with assembly language subroutines.
  • In comparison to mathematical functions, which never have side effects, programming language functions usually need side effects to do interesting (UI) stuff.
  • Recursion is a powerful paradigm
  • Closure - This is a topic I would like to cover in a little more detail, with examples. It is only fair, since it is such a deep and confusing one. To give you a sense of its complexity, here is a quote from the book YDKJS: "Understanding closures is like when Neo sees the Matrix for the first time". 
    • It is the context of an inner function, which includes the scope of the outer function.
    • An inner function enjoys that context even after the parent function has returned.
    • So let's see why we need them in the first place:
      • Global variables:
                     var names = ['zero', 'one', 'two'];
                     var digit_names = function(n){
                       return names[n]
                     };
                     console.log(digit_names(1)); // one
      • Here names is a global variable. What if another global variable with the same name exists in the environment? It would interfere with this one. To avoid this we could make the names array local to the digit_names function, but that would slow things down, since the array would be re-created every time the function is called.
      • With closure:
                     var digit_name = (function(){
                         var names  = ['zero', 'one', 'two'];
                         return function(n){ return names[n] };
                     }()); // call immediately

                     console.log(digit_name(2)); // two      
      • Now digit_name holds the function returned by the outer function, which is invoked only once; the returned inner function still has access to the names variable.
      • Another interesting thing to note is that this is not captured in a closure. A common pattern to work around this is to assign this to a variable named self or that.
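      That workaround can be sketched as follows (repeat is a made-up helper standing in for any callback-taking API):

```javascript
function repeat(n, fn) {
  for (var i = 0; i < n; i++) fn();
}

var counter = {
  count: 0,
  increment: function () {
    var self = this;          // capture the object reference;
    repeat(3, function () {   // the inner function's own `this` is NOT counter
      self.count += 1;
    });
  }
};

counter.increment();
console.log(counter.count); // 3
```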
      • Closures have a variety of other uses, such as partial application/currying, promises, sealers, pseudo-classical inheritance, prototypal inheritance, the module pattern, etc.

    Monday, July 18, 2016

    Understanding the MEAN stack - Part 4 : AngularJS

    So after a long break I am here to complete my view on the MEAN stack with the final link - AngularJS. Angular is the presentation/user interface tier in the MEAN stack application. It is particularly well suited for SPAs (Single page application). 

    What do we mean by a Single Page Application? It is pretty straightforward, actually: a web application that works entirely from a single page. In simple terms, it has all its elements in a single page as divs and programmatically shows and hides them based on what the user wants to do or see. It works as state transitions instead of the traditional page transitions we see in multiple page applications.

    Some other notable features of a single page application are:
    • Client side MVC
    • Component oriented
    • Asynchronous/ Event driven
    The role of the client is greatly increased in a single page application. In our MEAN stack app, AJAX calls are made from the client to the server (the Express web server), and the server responds with JSON objects. The client builds the view from this raw JSON and updates the div.

    Now let's take a look at some characteristics of AngularJS and how it is useful in a single page application.
    • HTML centric - "Angular is what HTML would have been, had it been designed for applications." This line from the AngularJS docs perfectly describes what we mean by HTML centric. In fact, it wouldn't be inappropriate to say that Angular is HTML. Angular allows adding custom HTML tags called directives. It also offers a set of pre-defined directives with the prefix ng-. A commonly used directive is ng-init, which initializes the app data.
    • Declarative - You just tell Angular what you want and it does the job. For example, if you are typing something on the client and you want it to show up in another element as you type, you just give the input an ng-model name and use that name in the other element within {{ }}:
                       <input type="text" ng-model="yourName">
               <h1> Hello {{yourName}} </h1>
             In jQuery you would have to write a script with the logic to do this.
    • Component-oriented - Angular brings the idea of scopes to the table. The DOM is divided into subsets, each governed by controllers, directives, and views. Each controller has a scope, and only the div/element assigned that controller can access that scope. This adds a layer of isolation between you and the DOM, making the code highly cohesive.
    • Dependency injection - One of the most important lessons of good coding practice is to avoid hard-coding things at all costs. Dependency injection is a very good way to achieve that, and Angular incorporates a couple of different flavors of it. Every time we create a DOM element that has an ng-controller, a controller module is newed up for that DOM element and injected:
                   <div ng-controller="MyController">
                     ...
                     <!-- behind the scenes: injector.instantiate(MyController); -->
                     ...
                   </div>
    Angular has an injector which handles the responsibility of creating the controller and its dependencies. Also when we are defining the controller we can inject the scope, services, filters etc. This is very helpful when writing unit tests as we can inject mocks.
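
    A toy injector can sketch the idea; the names ($scope, todoService) and the $inject annotation mirror Angular's style, but this is not Angular's actual code:

```javascript
// A made-up registry standing in for Angular's provider system.
var registry = {
  $scope: { message: 'hello' },
  todoService: { fetch: function () { return ['Pay bills']; } }
};

// Resolve a constructor's declared dependencies, then construct it.
function instantiate(Ctor) {
  var deps = (Ctor.$inject || []).map(function (name) { return registry[name]; });
  var instance = Object.create(Ctor.prototype);
  Ctor.apply(instance, deps);
  return instance;
}

function MyController($scope, todoService) {
  this.greeting = $scope.message;
  this.todos = todoService.fetch();
}
MyController.$inject = ['$scope', 'todoService'];

// In a unit test we could swap registry.todoService for a mock before this call:
var ctrl = instantiate(MyController);
console.log(ctrl.todos[0]); // "Pay bills"
```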

    Being a novice JavaScript developer, I found the Angular style of coding a little difficult to understand at first. But I guess that was because of the common misconception that Angular is a library; it actually fits the definition of a framework, not a library. With Angular you just write your HTML code and let the framework do the rest of the job of running it. We just declare what we need and are not bothered about how it is done, at least when it comes to the default directives.

    Sunday, April 3, 2016

    Understanding the MEAN stack - Part 3 : MongoDB

    We are half-way through the MEAN stack right now. We have discussed Node JS and Express JS and now let us discuss the persistence tier in the stack i.e. MongoDB.
    So we now have a platform, provided by Node, and a web server, provided by Express, which as we discussed largely provides us with our RESTful endpoints. But those endpoints are no good unless they have some persistence behind them, and that is what MongoDB brings to the table.
    Let us take a look at some features of MongoDB:

    • Non-relational - It does not enforce expectations and constraints, like the ACID properties, on the data the way relational databases do.
    • Scalable - It is horizontally scalable. MongoDB employs a method called Sharding in which it divides the data set over multiple servers. 
    • Highly available - It can tolerate multiple node failures and still be fully available, i.e. deliver read and write operations.
    • Document database - Instead of storing rows in tables, MongoDB stores JSON-like documents (serialized as BSON). Documents with a similar structure are organized into collections.
    • Flexible - Since it is essentially a JSON store, you can start off with what you think your data model looks like and change it later without affecting existing documents. This gives true iterative development capabilities.
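
    That flexibility can be pictured with plain objects (this is ordinary JavaScript standing in for documents, not a real MongoDB query):

```javascript
// Documents in the same collection may differ in shape; older documents
// simply lack the field that was added later.
var todos = [
  { text: 'Pay bills' },              // created before the "active" field existed
  { text: 'Buy milk', active: true }  // created after
];

// Reading the missing field yields undefined rather than an error:
var active = todos.filter(function (t) { return t.active === true; });
console.log(active.length); // 1
```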
    So why MongoDB in the MEAN stack?

    As we said earlier, MongoDB brings persistence to our MEAN stack application. But why choose MongoDB over a relational database like MySQL? Node supports MySQL with the node_mysql module. So far we have seen JavaScript in both tiers of the MEAN stack, and Angular.js is also a JavaScript framework. So why take the trouble of introducing another language like SQL when the other tiers of your application are in JavaScript? 
    • With MongoDB you can have JavaScript throughout your application, which increases the productivity of the developers working on it. 
    • Querying data from the Node/Express server and passing it to the frontend (AngularJS) becomes a lot easier, since we are storing data in JSON format in MongoDB.
    • Another important advantage of using MongoDB is the flexibility, and in a sense the degree of freedom, it provides when designing the schema/data model. We can start off with a rough model based on the information we have and let it evolve through an iterative and agile approach. To give you an example, when I was building the todo application I initially started off with something as simple as the following:
    mongoose.model('Todo', {
        text : String, // The textual content of the todo item e.g.:"Pay bills"
    });
              Later, when I wanted to implement a feature to mark a todo item as active or completed, I had to add a new property to the todo model to keep track of the status:
    mongoose.model('Todo', {
        text : String, // The textual content of the todo item e.g.:"Pay bills"
        active: Boolean // Active status of the todo item. true=>active, false=>completed
    });
    All that said, MongoDB has its own set of cons, like any other technology. It is true that MongoDB is fast and scalable, but that comes at the cost of the strong consistency and durability that traditional relational databases offer.

    Oops! I almost forgot to mention Mongoose. You must have seen in the above code snippets that I used an object called mongoose to describe my todo data model. Mongoose is a library that helps provide structure on top of a schema-less datastore like MongoDB, along with some other important features:
    • Validation.
    • Default data types.
    • Eventing - pre and post save hooks (middlewares)
    • Indexes.
    • One-to-Many relationships.
    • Pseudo-joins - populate method
    Here is a nice resource to learn more about the features Mongoose brings to the table.



    Monday, March 21, 2016

    Understanding the MEAN stack - Part 2 : Express JS

    In my previous blog we established that Node JS is not actually the web server but the platform on which you build your web server. Express JS is the actual web server module. Express JS is a CommonJS module that you bring into your application by requiring it from Node JS. Traditionally it was the other way around: we would deploy our application into a web server, but here we bring the web server capabilities into our application. 

    Before we go further with the properties of Express JS, let us understand what exactly it means to require a CommonJS module. 

    CommonJS modules standardize the way of working with JavaScript outside the browser. CommonJS is a specification, more like a design pattern. There are a number of libraries/modules implemented to this specification (e.g. express), and you can build your own modules too. As I mentioned in my previous blog, it mainly addresses JavaScript's single-global-namespace issue. A CommonJS module is really nothing but a JavaScript file. Each module is written in a single JavaScript file and has an isolated scope that holds its own variables. In this scope there are 2 key components:
    • module.exports - This object is contained in each module and allows you to expose pieces of code when the module is loaded:
    //In myCommonJSModule.js
    module.exports = foo;
    
    function foo() {
        //... do something
    }
    • require - It loads the module into your code:
    var myModule = require('./myCommonJSModule.js');
    myModule.foo();

    Ok, now let us take a closer look at express. In my previous blog I mentioned that the example code on the about page of Node JS shows how to create a simple HTTP web server. Clearly Node JS can use the http module directly to create a web server. Then why do we need express, you ask? Let's take a look:

    Express extends the core capabilities of http and along with that it brings 2 key functionalities:
    • Middleware
    • Routing
    If you used the http module directly, a lot of work, like parsing the payload or selecting the right route pattern based on regular expressions, would have to be re-implemented.

    Middleware: What is middleware? From the express.js site: "Middleware functions are functions that have access to the request object, the response object and the next middleware function in the application's request-response cycle." Express provides middleware functions for things like logging requests to the console, parsing the URL, parsing the body of the request, and setting the location of static files like CSS, JS and HTML files.
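
    The chain-with-next() idea can be sketched in a few lines (a toy version, not Express's actual implementation):

```javascript
// Each middleware receives req, res and a next() that advances the chain.
function run(middleware, req, res) {
  function next(i) {
    if (i < middleware.length) {
      middleware[i](req, res, function () { next(i + 1); });
    }
  }
  next(0);
}

var req = { url: '/todos' };
var res = {};
var log = [];

run([
  function (rq, rs, next) { log.push('logger saw ' + rq.url); next(); }, // logging
  function (rq, rs, next) { rs.body = 'handled ' + rq.url; }             // handler
], req, res);

console.log(res.body); // "handled /todos"
```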

    Routing: Routing refers to the definition of endpoints (URIs) in an application and how the application responds to client requests. A route is a combination of a URI, an HTTP request method (GET, POST, etc.) and one or more handlers for the endpoint.
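
    A toy routing table in the spirit of app.get()/app.post() can make this concrete; real Express also matches path parameters and patterns, which this sketch does not:

```javascript
var routes = [];

// Register a (method, path, handler) triple.
function addRoute(method, path, handler) {
  routes.push({ method: method, path: path, handler: handler });
}

// Find the first route matching method and path, or fall back to 404.
function dispatch(method, path) {
  for (var i = 0; i < routes.length; i++) {
    var r = routes[i];
    if (r.method === method && r.path === path) return r.handler();
  }
  return '404 Not Found';
}

addRoute('GET', '/todos', function () { return 'list todos'; });
addRoute('POST', '/todos', function () { return 'create todo'; });

console.log(dispatch('GET', '/todos')); // "list todos"
console.log(dispatch('PUT', '/todos')); // "404 Not Found"
```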

    So when a client sends a request to a path, the corresponding callback executes, and that is where we connect to the next tier in our stack, i.e. MongoDB.

    So to conclude, the key piece express gives us is the capability to build the web server. More importantly, it gives us the RESTful endpoints that are going to be called from our presentation tier (Angular JS), and once we get into the server methods it reaches out to the persistence tier (MongoDB). So in a way it acts as a bridge between the tiers of our application.

    Friday, March 18, 2016

    Understanding the MEAN stack - Part 1 : Node JS

    So a couple of weeks ago I was given an assignment to build a single page web application using the MEAN (MongoDB, Express JS, Angular JS, Node JS) stack as part of an interview. It was a simple todo application with functionality to add a todo item, mark a todo item as completed, and delete a todo item. Given that I have very little experience with building modern web applications using JavaScript, it was quite a challenging experience. And I guess the whole point of the assignment was to judge how open I am to working with new technologies and my ability to ramp up quickly. Nevertheless, I enjoyed working on the application.
    I had a little exposure to Node JS and MongoDB during one of my academic projects, but the other two, Express JS and Angular JS, were very new to me.
    So the first order of business was to understand the importance of each of these technologies and how they fit in together for a web application to come to life.
    So let's dive into it right away.

    Node JS

    When you visit the about page of Node.js, the first example they give you creates a simple HTTP server. I feel this sets an impression on a beginner that Node.js is for creating web servers. But that is not entirely true. Creating a server is just one of the many capabilities Node.js offers. Node.js is a platform: it lets you use its libraries to build a web server, and that is just one application.

    So why Node.js?

    • It buys you true platform independence. Like Java, Node.js offers a write-once-run-anywhere kind of environment, and this matters not just for your production targets but for your development platforms as well. Linux users and Windows users can all publish to a common target without running into platform-specific issues.
    • It's single-threaded and event-driven, so it stays fast even when handling lots of concurrent requests.
    • It has a large number of packages accessible through NPM, a package manager. It includes both client and server-side libraries/modules, as well as command-line tools for web development.
    Next up, Node.js uses what are called CommonJS modules. Now what are CommonJS modules, you ask? They mainly solve the problem of JavaScript's single global namespace. Here is an article which talks more about that. So Express JS is actually a module which we will require into our MEAN stack application. And how do you require a CommonJS module? We will look at this in the next part, where I will also talk about the next item in the stack, Express JS.

    Tuesday, January 26, 2016

    Effective Java - Item 8 : Obey the general contract when overriding equals

    In this item and the following four, the author talks about the non-final methods of the Object class - equals, hashCode, toString, clone - and in particular the best ways to override them: when, and how?

    The default implementation of the equals() method uses the "==" relation to compare two objects. Java docs states that "for any non-null reference values x and y, this method returns true if and only if x and y refer to the same object (x == y has the value true)."

    Let us look at the following scenario. I created a custom String class, MyString, which accepts a String as a constructor parameter. I then created two instances of MyString, passed both the same string literal "hello", and compared them using the equals method.
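
    The snippet was an image in the original post; a reconstruction along these lines (the class shape and field name are my guesses) behaves as described:

```java
// Hypothetical reconstruction: MyString wraps a String but does NOT
// override equals, so Object's identity-based equals is inherited.
class MyString {
    private final String value;

    MyString(String value) {
        this.value = value;
    }
}

public class EqualsDemo {
    public static void main(String[] args) {
        MyString a = new MyString("hello");
        MyString b = new MyString("hello");
        System.out.println(a.equals(b)); // false: a and b are distinct objects
    }
}
```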


    What do you think will be the output?

    From what we have learnt about the equals method it is obvious that the output will be false.
    Then we might as well have used the "==" operator to check their equivalence.

    But what if we wanted to check the logical equivalence of the two strings? In that case both of them should be equal. This is where the equals method comes into the picture. You can override the equals method and provide your own implementation to check the logical equivalence of MyString objects.

    I gave this example of providing a custom wrapper around the String class just to illustrate the application of overriding the equals method.

    So, now that we have learnt in what scenario we would want to override the equals method, the next step is to learn how to do it.
    The author says there is a written contract with a set of rules, and one must adhere to it when overriding the equals method.

    1.     Reflexivity

    The first requirement says that the object whose equal method you are overriding must maintain the reflexive property, i.e. the object must be equal to itself.
    Considering the MyString class from our previous example, the following code must output true according to this requirement.
    MyString a = new MyString("Hello");
    System.out.println(a.equals(a));

         2.     Symmetry

    The second requirement says that for any two non-null references x and y, x.equals(y) should return true if and only if y.equals(x) returns true. To better understand what this means, let us build upon our MyString class and say that we override the equals method to make it case-insensitive. The following code should output true:
    MyString a = new MyString("Hello");
    MyString b = new MyString("hello");
    System.out.print(a.equals(b));
    But what about in this scenario?
    MyString a = new MyString("Hello");
    String b = new String("hello");
    System.out.print(a.equals(b));
    It should output false, otherwise it would violate the symmetry property, because b.equals(a) will be false.
    And why is that so, you ask?
    Think about it: we are only overriding the equals method of the MyString class. The String class does not have a clue that we are trying to compare two strings irrespective of their case.

        3.     Transitivity

    The third requirement says the equals method should uphold the transitive property, i.e. if one object is equal to a second object and the second is equal to a third, then the first object must also be equal to the third.
    The author says this requirement is usually violated when an instantiable class is extended and the subclass adds a new value component.
    So let us extend our MyString class and add a color property:
    class MyColorString extends MyString {
        private final Color strColor;
        public MyColorString(String string, Color color) {
          super(string);
          this.strColor = color;
        }
    }
    Now consider the following three objects
    MyColorString str1 = new MyColorString("hello", Color.RED);
    MyString str2 = new MyString("hello");
    MyColorString str3 = new MyColorString("hello", Color.BLUE);
    These three objects can be compared using the equals method, since it should use the instanceof operator (we will see this later in the conclusion) and both str1 and str3 are instances of the parent class MyString.
    So if we compare the first and second instances, and the second and third, as:
    str1.equals(str2);
    str2.equals(str3);
    Both of them will return true, since they do not consider the color property of the subclass. But str1.equals(str3) will return false, since now the color property comes into account. This clearly violates the transitivity contract.
    A workaround could be to use the getClass() method instead of the instanceof operator in the equals method, to ensure we only equate objects of the same implementation class.
    But is this a good solution? According to the author it is not, since it violates the Liskov substitution principle, and that is an entire topic in itself. The author says: "There is no way to extend an instantiable class and add a value component while preserving the equals contract". 

        4.     Consistency

    This rule is quite simple. It says that the equality of any two objects should be consistent, i.e. their equality should hold unless one of them is modified. Here we need to consider whether the object we are dealing with is mutable or immutable, because mutable objects can be modified and so can be equal to different objects at different times; that is not the case with immutable objects.
    The author brings up an important and interesting question here: whether to make a class mutable or immutable? He asks the reader to think hard about this question. In fact, he has an item dedicated to it later in the book.
    Another important point to keep in mind when overriding the equals method is not to depend on unreliable resources such as network access. A notorious example of this is the equals method of java.net.URL.

        5.     Non-nullity

    The name of this rule was actually devised by the author. As the name suggests, this requirement says that for any non-null object x, x.equals(null) must return false.

    To conclude, you can use the following as a recipe for a well-implemented equals method:


    So here is an implementation of our MyString class with the equals method
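
    The original showed this as an image; here is a sketch consistent with the case-insensitive comparison used above (the field name is my assumption):

```java
class MyString {
    private final String value;

    MyString(String value) {
        this.value = value;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;                 // reflexivity shortcut
        if (!(o instanceof MyString)) return false; // also rejects null
        MyString other = (MyString) o;
        return value.equalsIgnoreCase(other.value); // logical, case-insensitive
    }

    @Override
    public int hashCode() {
        // always override hashCode along with equals, so equal objects hash alike
        return value.toLowerCase().hashCode();
    }
}
```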


    Saturday, January 16, 2016

    Effective Java - Item 7 : Avoid finalizers!

    Unpredictable! Dangerous! Unnecessary! The author, Joshua Bloch, makes sure to set the tone of the item right at the beginning with these adjectives. To reinforce this view of finalizers, he offers a rule of thumb: avoid them altogether. That is a pretty strong point of view.

    Just to give you some background, the finalize() method in Java is a special method of the class Object. It is invoked by the garbage collector (GC) on an object when the GC determines that there are no more references to it, i.e. just before the GC reclaims the object. It sounds similar to a destructor in C++, right? So, can it be used to clean up external resources, like closing files? 

    Wrong!

    The author gives the following 3 reasons to support his point of view:

    • The promptness of the finalize method's execution is not guaranteed.
    • There is a severe performance penalty for using finalizers.
    • Uncaught exceptions thrown by the finalize method are ignored, and finalization of that object terminates.
    The promptness of the finalize method's execution is not guaranteed

    The author says there is no fixed time interval between the moment an object becomes eligible for garbage collection and the moment its finalizer executes. It depends on the garbage collection algorithm, which varies with the JVM implementation. Further, he adds that there is no guarantee the finalize method will ever be executed at all. So now you can imagine how dangerous it is to depend on finalizers.
    I was curious to find out, because I have never actually used the finalize method in any of my code. So I wrote a small program to test it.

    Here I have one class with a FileInputStream field and a method which tries to open a file. I have overridden the finalize method, where I close the file. I added one print statement in each block (try-catch-finally-finalize) to better understand the flow of execution. I also created another class which creates an instance of this class, invokes the openFile method, and then nullifies the reference to make the instance eligible for garbage collection.

    import java.io.FileInputStream;
    import java.io.FileNotFoundException;

    public class OpenFileTest {

        private FileInputStream file = null;

        public void openFile(String filename) {
            try {
                System.out.println("Inside try block");
                file = new FileInputStream(filename);
            } catch (FileNotFoundException e) {
                System.out.println("Inside catch block");
                System.err.println("Could not open file " + filename);
            } finally {
                System.out.println("Inside finally block");
            }
        }

        @Override
        protected void finalize() throws Throwable {
            System.out.println("Inside finalize method");
            if (file != null) {
                file.close();
                file = null;
            }
            super.finalize();
        }
    }

    public class TestFinalizeMain {

        public static void main(String[] args) {
            OpenFileTest o = new OpenFileTest();
            o.openFile("C:\\shreyas\\CusumCalc\\cusum_output.csv");
            o = null;
        }

    }

    Output:
    Inside try block
    Inside finally block

    Wait, that is not what was supposed to happen! What happened to the finalize method being called by the garbage collector?
    This is exactly what the author was talking about.

    Here we can see that the finally block did get executed without any issues, so using a finally block to deallocate resources is a safe bet.
    There are ways to ask for the finalize method to run, like calling System.gc(), but again that just increases the odds and does not provide any guarantee.

    There is a severe performance penalty for using finalizers.

    The author says that the time to create and destroy an object on his machine increased by about 430 times when he used a finalizer. If you come to think of it, this makes sense. The job of the garbage collector is to scan the heap, determine which objects are no longer referenced, and deallocate the memory. But if an object has a finalizer, the garbage collector is interrupted. It so happens that finalizers are processed on a thread that is given a fixed, low priority. So objects that are otherwise eligible for garbage collection will be pending finalization, using up available memory and slowing your application down.

    Uncaught exceptions thrown by the finalize method are ignored

    Now here is another good reason why you are better off without a finalizer. Any uncaught exception thrown during finalization is ignored; it is not propagated further, and finalization halts. Java normally handles uncaught exceptions by terminating the thread and printing the stack trace to the console, but in this case there will be no warning, not even a printed message.

    So, what is a good alternative if you want to do resource-releasing work? The author suggests providing an explicit termination method and requiring clients of the class to invoke it on each instance when the instance is no longer needed. Good examples of such methods are the close methods on InputStream, OutputStream, and java.sql.Connection.
    Another option I could think of is the try-with-resources statement, available starting from Java 7.
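
    A small sketch of that option (the file name is arbitrary): the reader is closed automatically when the block exits, even if an exception is thrown, with no finalizer involved.

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;

public class TryWithResourcesDemo {

    public static String firstLine(String path) throws IOException {
        // try-with-resources: reader.close() runs automatically on exit
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            return reader.readLine();
        }
    }

    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("demo", ".txt");
        try (FileWriter w = new FileWriter(tmp)) {
            w.write("hello\n");
        }
        System.out.println(firstLine(tmp.getPath())); // prints "hello"
        tmp.delete();
    }
}
```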

    Now, all that said and done, the question remains: what is a genuinely good use of the finalize method? 
    In some rare cases, if the user forgets to call the explicit termination method of an object, a finalizer can be used as an extra safety net to free the resource - better late than never. But the author says that if you must use it, use it with care: one suggestion is to put the finalization code in a try block when you override the finalize method, and invoke the superclass finalizer (super.finalize()) in the finally block, because "finalizer chaining" is not performed automatically.

    A final thought - You are better off without a finalizer!