Sunday, March 6, 2016

Designing a Database

Before analyzing the most important factors to consider when designing a database, one must understand what database design is: the process of producing a detailed data model of a database. A data model organizes data elements and standardizes how they relate to one another.
A data model may be conceptual, logical, or physical in nature. A conceptual schema specifies the kinds of facts or propositions that can be expressed using the model. The logical data model captures the detailed business solution; it looks the same regardless of whether we are implementing in MongoDB or Oracle, and consists of descriptions of tables and columns, object-oriented classes, XML tags, and so on. The physical data model describes the physical means by which data are stored.
While designing a database, the designer must follow these steps:
·         Determine the data to be stored in the database. This involves understanding the business and how the proposed application is expected to behave.
·         Determine the relationships between the different data elements. The designer must understand and document how the business entities under discussion are interrelated.
·         Superimpose a logical structure upon the data on the basis of these relationships. Now the designer must map business entities and logic to tables, views, primary keys, foreign keys, normalization rules, etc. In an object database, the storage objects correspond directly to the objects used by the object-oriented programming language in which the applications that manage and access the data are written.
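As an illustration of the mapping step, here is a minimal sketch using Python's built-in sqlite3 module; the customer/purchase entities are hypothetical, chosen only to show how a one-to-many relationship becomes a primary key, a foreign key and a constraint:

```python
import sqlite3

# Hypothetical customer/purchase entities, used only to illustrate mapping
# business entities and relationships to tables, keys and constraints.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only if asked

conn.execute("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    )""")
conn.execute("""
    CREATE TABLE purchase (
        purchase_id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
        total       REAL NOT NULL
    )""")

conn.execute("INSERT INTO customer VALUES (1, 'Acme Ltd')")
conn.execute("INSERT INTO purchase VALUES (10, 1, 99.50)")

# The one-to-many relationship, now a foreign key, rejects orphan rows.
try:
    conn.execute("INSERT INTO purchase VALUES (11, 42, 5.00)")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```

The same relationship documented in the analysis step is now enforced by the engine itself, which is exactly the point of the mapping exercise.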
Some of the most important things to keep in mind while designing a database are as follows:
1.      Understand Business
As mentioned in the design steps above, understanding the business rules is the most important part; the rest comes with experience.
2.      Load on the application and volume of data
You must be sure how much load and how many concurrent users you are expecting. Do you need multiple servers, or a single server for both updates and reads? Always plan for much higher loads and data volumes than you anticipate today.
3.      Indexes
Will the application lean more towards read operations or towards updates and inserts? Indexes help a great deal with fast retrieval of records from a table, but if updates and inserts are too frequent for the server to keep up with, maintaining the indexes may actually reduce performance.
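A quick way to see the read-side benefit is to ask the engine for its query plan before and after adding an index. This sketch uses Python's built-in sqlite3; the table and column names are made up for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reading (id INTEGER PRIMARY KEY, sensor TEXT, value REAL)")
conn.executemany(
    "INSERT INTO reading (sensor, value) VALUES (?, ?)",
    (("s%d" % (i % 1000), float(i)) for i in range(10_000)),
)

query = "EXPLAIN QUERY PLAN SELECT * FROM reading WHERE sensor = 's7'"

# Without an index the lookup must scan every row...
print(conn.execute(query).fetchone()[3])

# ...with one it becomes an index search, at the cost of maintaining the
# index on every INSERT and UPDATE.
conn.execute("CREATE INDEX idx_reading_sensor ON reading (sensor)")
print(conn.execute(query).fetchone()[3])
```

The plan changes from a full-table scan to a search using the new index, which is the trade-off described above in miniature.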
4.      Normalization
The database design must be structurally correct and optimal. Normalization rules help a lot here, but sometimes, for the sake of performance in a design that is very specific to one application, these rules may be relaxed.
5.      Information Integrity
Most of the big-name database engines enforce data integrity automatically, but as a database designer you may not be able to enforce every business validation in the database itself; such validations must be clearly specified in the design documents. Things like primary keys, foreign keys, transactions, and triggers are your responsibility as a database designer.
6.      Security
Specifying the mode of authentication and hiding sensitive information via encryption are essential if the business needs them. I have seen many deployments where multiple projects used the same credentials to access multiple databases; this is clearly not a standard practice to follow. You must also be aware of all the auditing features provided by the database engine being used.
7.      Backup and deployment policies
You must be aware of the backup and deployment techniques and policies used by the organization for which you are designing the database.
8.      Programming platform
The database designer should be aware of the technology of the consuming application.
If you follow the rules of thumb above, most production issues can be avoided before they are ever encountered. Never forget those restless nights when a production issue arrives, and do the best you can well in time.

Get or Post? Yes, it does matter!!

Get and Post are two of the ways a client can make a request to a server using the Hypertext Transfer Protocol (HTTP).
Get means retrieving information from the server (in a format defined by the agreement), identified by the Request-URI.
We may opt for a conditional Get request, where the header may contain If-Modified-Since, If-Match, If-None-Match, or If-Range; this reduces load over the network. Partial Get is also supported, using the Range header.
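As a sketch of what such a request looks like, the snippet below builds (but does not send) a conditional, partial Get with Python's urllib; the URL is a placeholder:

```python
from urllib.request import Request
from email.utils import formatdate

# Build (only) a conditional, partial GET; the URL is a placeholder and
# no network traffic happens in this sketch.
req = Request("http://example.com/logo.png")

# If the resource is unchanged since this date, the server can answer
# 304 Not Modified with an empty body, saving bandwidth.
req.add_header("If-Modified-Since", formatdate(timeval=0, usegmt=True))

# A partial GET asks for just a byte range of the resource.
req.add_header("Range", "bytes=0-1023")

# Note: urllib normalizes header names to Capitalized-lowercase form.
print(req.get_method(), req.full_url)
print(req.get_header("If-modified-since"))
print(req.get_header("Range"))
```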
Important points to note when choosing Get:
1.      As a general rule, in a typical form submission with METHOD="GET", the browser constructs a URL by taking the value of the action attribute, appending a "?", and then appending the form data set. The encoding type used in a Get request is typically "application/x-www-form-urlencoded".
2.      GET requests can be cached. You may also bookmark them or retrieve the complete request later from browser history, though bookmarking and history are client features and vary with the client you use to make the Get request.
3.      Please note that only ASCII characters are allowed in a Get request URL.
4.      You cannot hide sensitive information (query string parameters), and even with HTTPS the server logs will still contain it. So for transferring sensitive information, Post (discussed below) is a better choice.
5.      The amount of information you can pass to the server is limited with Get; a common URL length limit is 2083 characters (1024 in certain cases). It is recommended to keep the query string under 2 KB, though some servers handle up to 64 KB. All in all, you should have a justification, such as caching, for keeping URLs so large if you want to stick with Get.
Off topic: in PHP, you may use the QUERY_STRING environment variable to retrieve the parameters passed in the URL, or use $_GET to get an array of the sent data.
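The URL construction described in point 1 can be reproduced with Python's urllib.parse; the action URL and field names are invented for the example:

```python
from urllib.parse import urlencode, parse_qs

# What the browser does with METHOD="GET": take the action URL, append
# "?", then the urlencoded form data set.
action = "http://example.com/search"
form_data = {"q": "database design", "page": "2"}

url = action + "?" + urlencode(form_data)  # application/x-www-form-urlencoded
print(url)

# On the server the query string parses back into the same parameters
# (the PHP $_GET array is the analogous structure).
params = parse_qs(url.split("?", 1)[1])
print(params)
```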
With Post, the data sent is part of the message body. Post is used for data submission, and caching is not an option here unless the response includes appropriate Cache-Control or Expires header fields.
Important points to note when choosing Post:
1.      As a general rule, in a typical form submission with METHOD="POST", a POST request is sent using the value of the action attribute, with a message created according to the content type specified by the enctype attribute. The encoding type used in a Post request may be "application/x-www-form-urlencoded" or "multipart/form-data"; use multipart encoding for binary data. You may still pass query string parameters in a Post request if you wish.
2.      Caching is typically not an option under common scenarios, and the most common browsers do not support bookmarking or keeping a history of the complete request.
3.      There are no restrictions on data type in a Post request; binary data is also allowed.
4.      As a rule of thumb, Post is a little safer than Get, since parameters travel in the message body. Over HTTPS, Get and Post are equally safe in transit, but server logs remain a reason to switch to Post when sensitive information is passed.
5.      The amount of information posted may be huge in the case of Post.

Off topic: in PHP, you may use $_POST to get an array of the sent data, depending on the complexity of the data.
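For contrast with the Get example above, here is the same kind of form data carried in a Post message body, sketched with Python's urllib; the URL and field names are placeholders and nothing is actually sent:

```python
from urllib.parse import urlencode
from urllib.request import Request

# The form data is placed in the message body rather than the URL.
body = urlencode({"username": "alice", "token": "s3cret"}).encode("ascii")
req = Request(
    "http://example.com/login",
    data=body,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)

# urllib infers POST as soon as a body is supplied.
print(req.get_method())
print(req.data)  # the body; it never appears in the URL or URL-based server logs
```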


LAMP stack

LAMP, as the name suggests, is a typical model of software subsystems (historically Linux, Apache, MySQL and PHP) bundled together to provide a platform on top of which web-service-based and similar applications may be deployed.
1.      The components are largely interchangeable. The software subsystems may vary, but since Linux, Apache, MySQL and PHP are free, this combination is the most common. Other common variants are:
·         LAPP - Linux, Apache, PostgreSQL, Perl/Python/PHP
·         WAMP - Windows, Apache, MySQL, Perl/Python/PHP
·         MAMP - Macintosh, Apache, MySQL, Perl/Python/PHP
·         BAMP - BSD, Apache, MySQL, Perl/Python/PHP
·         WIMP - Windows, IIS, MySQL, Perl/Python/PHP
·         AMP - the stack with no operating system specified
We may justify choosing the LAMP stack by listing the benefits of the components used; the beauty is the interchangeability of the components, so choose the benefits you like.
2.      The availability of a vast variety of free plugins makes it even more attractive; for example, netsniff-ng (a free Linux network analyzer and networking toolkit), Snort (an open-source network-based intrusion detection system), RRDtool (a round-robin database tool), Nagios (monitoring and alerting services for servers, switches, applications and services), collectd (a Unix daemon that collects, transfers and stores performance data of computers and network equipment) and Cacti (a web-based network monitoring and graphing tool).
3.      It supports multiple server-farm solutions for high loads and better availability, with the help of additional components such as a load balancer.
4.      Deployment of applications is straightforward; in most scenarios it may be as simple as copying content into place. On most Linux-based systems the stack is available by default.
5.      Security need not be an issue: being open source with a large user base, problems are resolved faster than you could expect from paid products.
6.      You may use various free packages to configure LAMP and similar solutions. One famous package is XAMPP (cross-platform, Apache, MySQL/MariaDB, PHP, Perl), with a vast variety of add-ons available.
Drawbacks of LAMP stack
Most of the time, when we count the drawbacks of LAMP, we are really criticizing one of the software components treated as one of the four layers, or how they are integrated.
For example, we may say that Apache is not the highest-performing server on the market today, or point to issues in PHP or MySQL, and so on.
You have to be very careful about choosing the right components. Say the functionality you need in your application requires PHP 7.0, but the solution you are going for does not support the latest version at all; you may be in trouble (though it may not count as a drawback for you, since you know the issue and can plan accordingly).
Alternative Technologies
LAMP is used to serve web content, and in my opinion pretty much everything capable of doing so is an alternative, be it C#/.Net-based Web APIs or the Java Servlet API, and so on. But when we talk about alternative technologies, we are generally swapping one of the components and comparing stacks that differ in an underlying component. For example, the LEMP stack (Linux, Nginx, MariaDB/MySQL, PHP) is a variant where Nginx replaces Apache. A few of the other variations are listed above.
The MEAN stack is a collection of JavaScript-based technologies used to develop web applications, and I feel it is the strongest competitor today. It consists of Node.js (a server-side JavaScript execution environment), Express (a lightweight framework used to build web applications in Node), MongoDB (a schemaless NoSQL database system, considered by many to be a better fit than MySQL for some workloads) and AngularJS (a JavaScript framework developed by Google).
§  MongoDB is built for the cloud, and the falling cost of better hardware makes it more appealing than MySQL and its relatives. (But who says the DB layer in LAMP cannot be MongoDB!)
§  Agreed, Node.js simplifies application development.
The benefits of the MEAN stack over the LAMP stack are covered in much more depth by Wayner (2015).
Brown, M. (2007, August 24). Understanding LAMP and its effect on web development. Retrieved from
Howitt, J. M. (2010, January 27). What are the advantages and disadvantages of running PHP on Windows [Web log comment]. Retrieved from
Leith, E. (2011, March 28). What are some disadvantages of LAMP stack? [Web log comment]. Retrieved from
LEMP [Computer software]. (2016). Retrieved from
Telly, B. (2013, March 08). Re-writing a large web application - alternatives to LAMP [Web log comment]. Retrieved from
Wayner, P. (2015, June 22). LAMP diehards take note: The flexible simplicity of MongoDB, ExpressJS, AngularJS, and Node.js is no joke. Retrieved from

XAMPP [Computer software]. (2015, December 31). Retrieved from

Are you a successful Web Developer?

The features websites are expected to deliver have been ever increasing. As the scope becomes vast, so do the opportunities for those with the potential to realize these business needs. Now the question is: what makes you, as a developer and pathfinder, qualified enough to survive in this dynamic domain?

Well, I have no prescription pills for you today that you can swallow and be good to go. But I can give you glimpses of the direction the digital world is moving in and what is expected today. It may be an altogether different story five years down the line! It was only in 1990 that the first page was served on the open internet; in just twenty-five years, look around at where we are. The way content is managed, delivered and consumed has taken an altogether different direction. Does that mean you want to be a mere spectator? I assume not; that's why you are here with me today, right?

Gone are the days when tiny websites delivered unsecured content and an online presence was considered optional for big names. Today websites serve not only public content but highly confidential material that may be worth billions and could bankrupt an organization if leaked. The only way left for us developers is to think like a hacker. Does that sound crazy? I mean it: the security measures we develop to secure content will be tweaked and hacked at some point by someone smarter than the creator. So we need to stay constantly one step ahead. The only way to maintain the balance for our own good is to patch the security hole before someone else finds it.

If you have worked with any big services organization, you must have observed how much they emphasize and appreciate reusable assets across verticals and horizontals. We see more and more versatile tools, products and complete solutions floating around in today's era. Does this mean that by making something reusable, or launching a multidimensional and versatile product, you are eating up developers' jobs? It may sound crazy, but the answer is no: demand is increasing at a rate supply is far behind. It only means we are trying to use resources better. But if you are the kind of developer who is unwilling to learn new things and to use others' efforts judiciously, it is difficult to survive in today's market. Yes, you must attain expertise in a technology, but don't make the mistake of standing still and being thrown out of the business. The world is changing around you; open your eyes.

Many new websites and solutions are launched every day, but only those which earn more customers survive. Here comes the unspecified and ever-changing common sense of developers, which they can use to make projects profitable enough to earn repeat business. As a developer you must create products with usability in mind. In the past this may have meant only better CSS, but today it means a lot more: a website may die quickly without being device independent and without companion iOS and Android apps. This also means there is a wider scope for developers to exist today.

This may seem an endless topic to discuss, but there is one finite shortcut you may always keep in mind. Does it sound like a magic path? Possibly it is, but I call it a pointer, nothing more. Right from the Archie search engine in the 1990s to Google today, master minds have toiled to make it easy for the end user to reach content of interest as fast as possible. So why not keep an eye on the search-engine ranking logic prevalent today? Google may rate a website better if it is mobile friendly, fast, authentic in its content, secure and liked by the masses (among many other factors).

But remember, these are only some of the parameters you may follow to help realize your client's goals. If you win your client's faith, you are successful in this domain; and remember, the client is always smarter than you, which is why they hired a genius like you. If technology and you as a developer are evolving, consumers are becoming smarter too.



Friday, March 4, 2016

Web is stateless

HTTP is a stateless protocol. You may use cookies and sessions to maintain application state specific to an end user.
What are the differences between sessions and cookies?
An HTTP cookie (web cookie, Internet cookie, browser cookie, or simply cookie) is a small piece of data sent from a website and stored in the user's web browser while the user is browsing. If the programmer does not assign an expiration date to the cookie, it is lost when the browser closes; such cookies are in-memory cookies for the browser. On the other hand, you may set an expiration date to make it a persistent cookie, which is stored on the client-side hard drive and retrieved on the next visit to the website, based on the expiration date set by the server side.
Programmers may use the Response object to create and set cookie values, and the Request object to retrieve the values of cookies created during previous interactions. Cookies are associated with a website, not with a specific page, so the browser and server will exchange cookie information no matter what page the user requests from your site (exception: see the Path property of cookies in the benefits section below).
Limitations of cookies:
1. The security of a cookie generally depends on the security of the issuing website and the user's web browser, and on whether the cookie data is encrypted. Security vulnerabilities may allow a cookie's data to be read by a hacker, used to gain access to user data, or used to gain access (with the user's credentials) to the website to which the cookie belongs. You should never store sensitive data in a cookie, such as user names, passwords or credit card numbers. Do not put anything in a cookie that should not be in the hands of a user or of someone who might steal the cookie. This also means that on the server side you should safeguard server-side logic with extra validation whenever you take input from cookies. (Less secure than a session.)
2. Most browsers support cookies of up to 4096 bytes; this limit applies to the name-value portion of the cookie only. (Sessions have no such size limit.)
3. Most browsers allow only 20 cookies per site; if you try to store more, the oldest cookies are discarded. Some browsers also put an absolute limit, usually 300, on the number of cookies they will accept from all sites combined. So you may have to create cookies with sub-keys if you are reaching the count limits. (Sessions have no count limit.)
4. The user has the right to refuse cookies. As per the Cookie Law, your website must inform visitors how you use cookies. You should also write a dummy cookie in your web application and read it back on the server side to verify whether the user's current browser supports cookies. (A session is server side, so the user cannot handle or control it directly.)
5. The user can clear cookies in the web browser, no matter what expiration time you set. (The user cannot clear sessions without functionality you expose.)
6. You must check for the non-existence of a cookie key in the request object to avoid null-reference errors. (The same is true for sessions.)
7. In the request object you will not get the expiration date of the cookie; if you are concerned about the expiration date, you need to reset it every time on the server side. (Session timeout is controlled in web.config in .Net apps.)
Benefits of cookies:
1. You may limit the scope of a specific cookie to a specific folder of the website by setting the cookie's Path property.
2. You may set the cookie's Domain property to limit its scope to a specific domain or sub-domain.
3. You may ask a browser to delete a cookie by setting its expiration date earlier than the current time, say yesterday.
4. To modify an existing cookie, you may create a new cookie on the server side with the same name and send its new value to the client.
5. We will discuss later in this post what server-side sessions are and how they benefit from cookies.
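Several of the cookie properties above (Path scoping, persistence via an expiry, deletion via a past expiry) can be sketched with Python's standard http.cookies module; the cookie name and values are invented for the example:

```python
from http.cookies import SimpleCookie

# Server side: build a Set-Cookie header. With no expiry the cookie is
# in-memory and dies with the browser; Max-Age/expires make it persistent.
cookie = SimpleCookie()
cookie["theme"] = "dark"
cookie["theme"]["path"] = "/account"        # benefit 1: limit scope to a folder
cookie["theme"]["max-age"] = 7 * 24 * 3600  # persistent for one week
print(cookie.output())

# Benefit 3: "deleting" a cookie is just sending an expiry in the past.
gone = SimpleCookie()
gone["theme"] = ""
gone["theme"]["expires"] = "Thu, 01 Jan 1970 00:00:00 GMT"
print(gone.output())
```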
A session can be defined as server-side storage of information that should persist throughout the user's interaction with the website or web application. Dealing with sessions without cookies is a mess, as described in the benefits of cookies above: web applications transmit session IDs from the server side as cookies, so that the session ID in the next request can identify the session. Some older browsers do not support cookies, or the user may have disabled them; in that case the session ID may be embedded in every clickable href on the page, which seems even less secure to me than disabling cookies.
Types of session implementation: In .Net applications, sessions may be implemented in memory on the server side (InProc session mode), using the state server service (aspnet_state.exe), using SQL Server, or via custom providers; SQL Server session mode is the more reliable and secure session-state management, as per Jana (2009). In PHP-based applications, you may edit php.ini, set session.save_handler and use an external database to store sessions, as per Waterson (2015). Possible values for save_handler in PHP include files (the default), mm, database backends and SQLite. PHP also provides a function that lets you override the default session mechanism by specifying the names of your own functions for the distinct tasks, as per Shiflett (2004).
Benefits of sessions:
1. It is easier to maintain user-specific data across all requests.
2. A wide variety of object types can be stored.
3. Much more secure and hidden from the user compared to cookies.
4. Under the in-memory model, session data lives in a memory object of the current application domain, so access is very fast and the data is easily available.
5. No serialization is required to store data in InProc session mode.
6. In .Net, sessions may be handled at page level too; we may disable the session or make it read-only on a specific page using the EnableSessionState property of the page.
Limitations of sessions:
1. There is overhead involved in serializing and deserializing objects in the StateServer and SQLServer session modes.
2. Under InProc session mode, if the worker process or application domain is recycled, all session data is lost. We may want to switch to the state service, an external database or a custom provider here.
3. Although InProc session mode is the fastest, more session data and more users can hurt performance because of memory usage.
4. In multi-server farm scenarios, InProc session mode is not used at all.
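The basic idea of a server-side session (an opaque ID travels in a cookie while the data stays on the server) can be sketched in a few lines of Python; this is an illustrative in-process store, not any framework's actual implementation:

```python
import secrets

# In-process session store: the analogue of InProc mode. The client only
# ever holds the opaque session id (delivered in a cookie).
SESSIONS = {}

def create_session(data):
    sid = secrets.token_hex(16)   # unguessable id, sent back via Set-Cookie
    SESSIONS[sid] = dict(data)
    return sid

def load_session(sid):
    # Always handle a missing or expired id; .get avoids a KeyError.
    return SESSIONS.get(sid)

sid = create_session({"user": "alice", "cart": ["book"]})
print(load_session(sid))
print(load_session("forged-id"))  # unknown id: no data leaks to the client
```

Note how a forged or stale ID yields nothing: the sensitive data itself never leaves the server, which is the security argument made for sessions above.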
When might a developer choose one or the other?
First things first: as defined above, a session needs a cookie, at minimum to store the session ID. If a specific piece of information need not be secured (see limitation 1 of cookies above) and is small, a cookie may be chosen to store it, e.g. the user's cart items, or user preferences to apply on the next visit without login. On the other hand, secured and heavy information is never passed to client-side cookies, as noted in benefit 3 of sessions above.
Are there any privacy or security implications to using either?
Session objects live on the server side. That does not mean they are fully secure, but they are certainly much more secure than cookies on the client side. Sessions will be as secure as you make them; cookies do not even qualify for this race, as explained in limitation 1 of cookies. You may make sessions more secure by using an external database, or use the EnableSessionState property of web pages in .Net.
What benefits do they provide to the developer that might override those privacy and security implications?
Cookies put less overhead on the server side; if the information need not be secured, such as tracking cookies or website-usability data, we may choose them. Detailed benefits and limitations of cookies compared to sessions are analyzed in the limitation/benefit sections above.
Auger, R. (2011, January). Cross Site Scripting. Retrieved March 03, 2016, from
Iain. (n.d.). Browser Cookie Limits. Retrieved March 03, 2016, from
Jana, A. (2009, January 23). Exploring Session in ASP.NET. Retrieved March 03, 2016, from
LassoSoft. (n.d.). Lasso Programming: Tutorial: Understanding Cookies and Sessions. Retrieved March 03, 2016, from
Microsoft. (n.d.). Maintaining Session State with Cookies. Retrieved March 03, 2016, from
Microsoft. (2011). ASP.NET Cookies Overview. Retrieved March 03, 2016, from
Optanon. (n.d.). The Cookie Law Explained. Retrieved March 03, 2016, from
Shiflett, C. (2004, December 14). Storing Sessions in a Database. Retrieved March 03, 2016, from

Waterson, K. (2015, May 8). Introduction To PHP Sessions. Retrieved March 03, 2016, from

Saturday, February 27, 2016

Security threats associated with exposed server side error details

This topic may be categorized under penetration testing and website hacking. According to the Open Web Application Security Project (2014), many SQL injection exploitation techniques rely on detailed error messages from the database driver. Further in-depth testing and code review may help determine possible vulnerabilities and minimize the risk.
Improper error handling is not only unpleasant for the end user; it also serves as a starting point for hackers to define a strategy, by exposing the high-level and low-level software components used to build an application. It may reveal how the website is logically built from top to bottom, along with the database schema. If an expert attacker knows exactly the building blocks and DB schema of an application, he is halfway to stealing the confidential information you are hiding from anonymous users.
All the major software bundles provide developers with basic building blocks for robust error handling. For example:
·         Apache is a common HTTP server for serving HTML and PHP web pages. By default, Apache shows the server version, installed products and the operating system in HTTP error responses. Responses to errors can be configured and customized globally, per site or per directory in apache2.conf using the ErrorDocument directive: in case of an error, Apache can output a hard-coded message, a customized message, or a redirect to an internal or external page. Administrators may configure AllowOverride for .htaccess files; to allow ErrorDocument there, you need to set AllowOverride to All. ServerTokens and ServerSignature may be configured to hide server-specific information in HTTP errors.
·         Web applications based on Microsoft technologies are generally deployed on Internet Information Services (IIS). In a typical .Net web application, developers may suppress unhandled errors being exposed to users with a custom error page; a .Net web application that shows the yellow error screen was built by a novice team. In some applications, error handling is taken very seriously using custom exception-handling HTTP modules: along with the custom error page, a unique identifier is sent to the client, which the end user can easily be instructed to share with the support team. This unique identifier is generated by the custom HTTP error-handling module and saved along with the error information in one of several possible ways. Some applications I have seen use third-party frameworks like ELMAH and log4net for robust logging to flat text files and an error database. A detailed low-level application design defines what information the end user needs in order to correct input data, and what else is to be hidden by the error-handling modules behind a unique tracking identifier.
Next time you are in the development phase of a web application, remember that your responsibility for handling errors does not end with a try-catch-finally. Low-level details must be specified and planned well ahead. Some hacker on the other side of the world is waiting for you to shirk the work.
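The tracking-identifier pattern described above can be sketched in a few lines of Python; the handler and its logic are invented for illustration:

```python
import logging
import uuid

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("app")

def handle_request(payload):
    """Invented request handler: divide 100 by the supplied number."""
    try:
        return 100 / int(payload)
    except Exception:
        # Full details (stack trace, inputs) stay in the server-side log...
        ref = uuid.uuid4().hex
        log.exception("request failed, ref=%s", ref)
        # ...while the user sees only a generic message plus a reference
        # they can quote to the support team.
        return "Something went wrong. Reference: %s" % ref

print(handle_request("4"))
print(handle_request("0"))  # error details go to the log, not to the user
```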

Apache Software Foundation. (2015, December 10). Log4net (Version 2.0.5) [Computer software]. Retrieved from
Aziz, A. (2012, April 13). ELMAH (Version 1.2.2) [Computer software]. Retrieved from
Open Web Application Security Project. (2014, August 8). Improper Error Handling. Retrieved February 23, 2016, from
Penn Computing. (2016, February 26). SWAT Top Ten: Improper Error handling. Retrieved February 26, 2016, from

The Apache Software Foundation. (2016). Apache Core Features. Retrieved February 26, 2016, from

Sunday, June 28, 2015

Analyze loopholes in SharePoint Security Framework

Annotated References
Dokic, D., Zrakic, M. D., Bogdanovic, Z., & Labus, A. (2015). Application of SharePoint Portal Technologies in public enterprises. Revija za univerzalno odličnost [Journal of Universal Excellence], 4(1), A11-A25. Retrieved from
This paper deals with the application of portal technologies for enhanced content management, document management, and collaboration within public enterprises. The goal is to achieve efficient exchange of information on all hierarchical levels, as well as mechanisms of reporting and performance measurement, such as business intelligence and key performance indicators, taking into account concepts of scalability, availability, ubiquity and pervasiveness. A case study within the public enterprise Post of Serbia is used to achieve the goal. The results of the analysis show that applying Information and Communication Technologies (ICT) necessarily leads to the transformation of business processes based on the flow of paper documents. In addition, applying ICT leads to standardization, changes in organizational structure, and change management.

SharePoint, as an ICT platform, requires a major organization-level commitment from participants, and no unified approach is available to date that could be implemented to streamline the process and ensure a smooth transition. By the time it surfaces that this transformation is far more expensive than expected and relatively insecure, it is generally too late. A formal study should be published to identify these risk factors.

Jali, M. Z., Furnell, S. M., & Dowland, P. S. (2010). Assessing image-based authentication techniques in a web-based environment. Information Management & Computer Security, 18(1), 43-53. doi:10.1108/09685221011035250
The authors analyzed the usability of two image-based authentication methods in a web-based environment: clicking secret points within a single image (click-based) and remembering a set of images in the correct sequence (choice-based). For a direct comparison of usability, the same set of forty participants (thirty-three males and seven females) was given paper-based and web-based tasks, and the two techniques were evaluated based on user feedback. The results suggest that click-based authentication is more secure, while choice-based authentication scores better in terms of usability. Although participants rated the choice-based method as weak, it was still their preferred alternative for replacing passwords. This result suggests that participants preferred "convenience", albeit with an awareness of the "security" risks.

With SharePoint 2013 claims-based authentication, it might be possible to insert multiple security layers enveloped under the same set of services. A username and password combination, supplemented with click-based or choice-based user verification, is something we need today. It would be well worth conducting a usability and technical feasibility study of the suggested approach.
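The layering suggested above can be sketched as follows. This is a hypothetical illustration, not SharePoint code: the image ids, the `ENROLLED` store, and both function names are invented for the example. It shows a choice-based check (the user must pick their secret images in the enrolled order) used as a second factor on top of an ordinary password check.

```python
import hmac

# Hypothetical enrollment store: each user's secret image ids, in order.
ENROLLED = {"alice": ["img07", "img21", "img03", "img14"]}

def verify_choice_sequence(user: str, picked: list) -> bool:
    """Choice-based factor: same images, same sequence."""
    secret = ENROLLED.get(user)
    if secret is None or len(picked) != len(secret):
        return False
    # Compare the joined sequences in constant time, so a failure does
    # not leak how many leading picks were correct.
    return hmac.compare_digest("|".join(secret), "|".join(picked))

def login(user: str, password_ok: bool, picked: list) -> bool:
    """Both layers must pass: password first, then the image sequence."""
    return password_ok and verify_choice_sequence(user, picked)
```

In a real claims-based setup, each passing factor would contribute a claim to the issued token rather than a boolean, but the layering logic is the same.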

Nastase, P., & Eni, L. C. (2015). Developing an online collaborative system within the domain of financial auditing. Amfiteatru Economic, 17(39), 823-835. Retrieved from
The paper discusses a technical design for the online availability of audit records using SharePoint. Online audit records here mean the information required by both financial auditors and the employees of the Chamber of Financial Auditors of Romania. The evaluation of this technical design involved a feasibility study and later an implementation using Microsoft SQL Server 2008 R2, SharePoint Server 2010, SharePoint Designer 2010, and various implementation features: external content types, external lists, business data web parts, etc. Two research methods are highlighted in this paper: the first is empirical, based on formulating a questionnaire and interpreting the results, while the second is an analysis of the implementation process using a step-by-step approach. The online audit database stores information about the results of previous audits, the opinions issued as a result of audits, the results of online electronic inspections, audit firms, audited entities, identified risks, etc. The conclusion was that the online database, updated through the Internet, is feasible to implement in SharePoint for multiple audit stakeholders, including financial auditors who can sell their financial audit services while benefiting from the transparency that the system provides.

Although this article elaborates well on the technical design and feasibility of SharePoint and related tools for reporting purposes, and identifies use cases where Business Connectivity Services may be leveraged, one of the most important concerns is left untouched: the dynamic nature of reports (where required), driven by business rules, for multiple users on the same platform. This should be addressed in a separate paper, considering that once a solution is implemented it must cater to future needs, and at the same time this flexibility should not open new security loopholes.