Friday, 30 November 2007

CardSpace xor OpenID - trends

Mood: My skis are packed, holidays incoming!

About CardSpace
CardSpace (formerly "InfoCard") is the solution introduced within .NET Framework 3.0 alongside:
- Windows Workflow Foundation (WWF)
- Windows Communication Foundation (WCF)
- Windows Presentation Foundation (WPF)
It is Microsoft's standard for authentication and digital information transport. Microsoft did a 180-degree turn here. The predecessor, Microsoft Passport, was a centralized solution, where Microsoft servers possessed all the information about authenticated users. It did not become popular outside the Redmond servers. Now the user is the one who possesses the information and the one who decides which information (Information Card) from his personal PC is passed to which web site. CardSpace is thus a distributed solution, where Microsoft provides just the standard for exchanging data, and it has a much bigger chance of success.

How it works is described in detail on many pages, to name just one.

Fig1 – CardSpace model
If you know Polish, you should also read this.

Personal Information Card – first steps
CardSpace defines two types of Information Cards:
- Personal Cards, which contain a standard set of information defined by MS
- Managed Information Cards (use the package if you want to play with them)
To begin with, it is suggested to play with Personal Cards and install your personal one as described here. An important note, though: you do not need Vista; it also works on XP SP2 and Windows 2003 SP1.

You may register your card in Windows Live or MyOpenID. MyOpenID is an interesting authentication intermediary for a number of other web portals, controlling your "sign in" action there with one assigned id. After you sign in to these services for the first time, you are transferred to the MyOpenID page, where you may decide on your nickname, and you are asked whether you want to allow the action forever, allow it once, or deny it.

Fig2 – MyOpenId service

Nevertheless, after registering with both services you may transparently get into the Windows Live service, but MyOpenID will require you to send your Personal Information Card during each Sign In action! The latter is probably due to a different model and a higher security level, as any change in MyOpenID affects the way a number of other sites are accessed.

After these and other actions you may always check the usage history of your card.

Fig3 – History of Personal Information Card

CardSpace-enabled Web page
When you want to create your own web page which is CardSpace-enabled (like MyOpenID), there are a couple of tricks:
- You need to create and install some "X" High Assurance certificate issued by a "Y" CA to enable SSL (*.pfx file)
- Add the "Y" CA certificate to the list of Trusted Root Certificate Authorities (*.sst file)
- A specific page for choosing, displaying and registering the Personal Information Card
- An authentication mechanism for your page
All these things are described in detail in a couple of places, to name just one.

Furthermore, Firefox 3.0 (likely) supports CardSpace.

OpenID
The other standard on the market is OpenID, by the OpenID Foundation. You may get and use your OpenID in various services listed on the page. OpenID is a more widely adopted standard than CardSpace, but… Microsoft is not willing to compete with it; rather, they have announced they are willing to integrate! Furthermore, the myOpenID service mentioned above already seems to integrate both options, CardSpace and OpenID.

If you are willing to use the standard, you should visit the OpenID source page (OIDS), where you will find all the necessary downloads (documentation is not available, though) and information about ongoing projects. Generally there are two ranks of participants:
- OP: OpenID Providers (the plan is to have at least one per country, but the coverage is currently quite poor) – the counterpart of Microsoft's IP
- RP: Relying Parties (the users of OpenID under the OIDS license – for example, web site owners)
The whole idea seems to be a good, open campaign to organize a chain of trust between the identity providers and web site owners, to avoid phishing attacks like the one described here. There is quite an intriguing business model presented by the organization:

Fig4 – How OIDS works

Future
It seems OpenID is a step ahead of CardSpace, as it has already started to build its net of providers, users and partners. If you type "How to become an Identity Provider" into Google, among the first 10 findings there is no information about CardSpace or any other Microsoft service, but there are two links (VeriSign and Public OpenID providers) connected directly with the OpenID enterprise. If there is no visible promotion of Information Card providers, the whole idea may end up in a similar way to Microsoft Passport – it will be used mainly on Microsoft pages :)

The other problem is that there are not too many visible places where you can get a Managed Information Card. Even the ones you can find on the web often do not work, like this one. Furthermore, even the basic Personal Information Card does not work in some web services, like SignOn. The funny thing is that it works for some people, or it worked in the past. Of course, the error might be (and probably is) in the web site, but annoyed non-techie people may blame CardSpace.

The interesting option for Microsoft, though, is to provide the "presentation layer" and use OpenID as the "transport layer". As cooperation between the two is planned and myOpenID already goes in that direction, it is quite possible that a future Windows (Vista) user may authenticate with CardSpace to obtain a widely distributed and popular OpenID on the network.

Friday, 23 November 2007

Badmail coding adventure

Mood: Saturday party - heeeeaaaar I come !!!!!
Soundtrack: "Monkey dance" remixed (no comments)

Introduction
In one of our projects, we had to ensure that a sent email was actually received. As is easy to guess, there are a number of reasons why this may fail, and one of the most obvious is a wrong email address. The standard SMTP mechanisms do not provide any solution, as it is assumed to be more of a Mail Server problem than a protocol one. I use the standard IIS SMTP service preconfigured in the well-known way, but there are still a couple of ways you can deal with the problem.

First of all you need some simple application to send emails using the SMTP service. To save time you may use one of the applications available on the web, like this one. Then there are the following options…

Custom headers
There is a quite cool site on the web describing how the System.Net.Mail library works, and one of its paragraphs states that:

By adding custom headers, we can tag email messages with information that may not be visible to the end user, yet still might be important to us. This is especially useful in tracking bounced emails.

Well, this is pretty close to what one needs. Adding an invisible header and reading it back is not a problem, with one exception… where will we get the information that the email has bounced?
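For illustration, here is a minimal sketch of tagging an outgoing message this way with System.Net.Mail (the header name, the addresses and the SMTP host are my assumptions, not taken from the article quoted above):

using System;
using System.Net.Mail;

class HeaderTaggingSample
{
    static void Main()
    {
        MailMessage msg = new MailMessage("sender@example.com", "recipient@example.com",
            "Subject", "Body");
        //The custom header travels with the message and comes back in the bounced copy
        msg.Headers.Add("X-Sender-Tracking-Id", Guid.NewGuid().ToString());

        //Local IIS SMTP service (assumption)
        SmtpClient client = new SmtpClient("localhost");
        client.Send(msg);
    }
}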

Badmail
Badmail is a specific folder, configurable in the Messages > Badmail directory setting (by default C:\Inetpub\mailroot\Badmail), where all non-sent emails are dropped after a predefined number of unsuccessful delivery attempts. You may easily test it by sending an email to a non-existing recipient and restarting the IIS SMTP service – the email will appear in the Badmail directory immediately (without the restart you usually have to wait a while).

Now, each bounced email will in fact generate 3 files:
- BAD, which is the bounced email packed with a leading message and delivery status
- BDP, which contains binary information related to the file
- BDR, which contains information about why the email was not deliverable

What we actually need is BAD file analysis – the bare-minimum plan is to just look for the unique name of our custom header and read its content, defined during sending, as an error message ready to display.
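A bare-minimum sketch of that idea, assuming the header name from the earlier example and the default Badmail path:

using System;
using System.IO;

class BadmailQuickScan
{
    static void Main()
    {
        const string header = "X-Sender-Tracking-Id:";
        foreach (string file in Directory.GetFiles(@"C:\Inetpub\mailroot\Badmail", "*.bad"))
        {
            foreach (string line in File.ReadAllLines(file))
            {
                //Plain text search for our custom header inside the bounced copy
                if (line.StartsWith(header))
                    Console.WriteLine(file + ": " + line.Substring(header.Length).Trim());
            }
        }
    }
}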

Well… that is not an elegant solution, so it is much better to fully parse the BAD file.

MIME reader
Recently, the .NET POP3 MIME Client library was released on CodeProject; it is a ready-to-use library – plug & play. You just need to make one small change in the MimeReader class and alter one of the conditions in ParseBody:

//Parse a new child mime part.
[…]
else if (string.Equals(_entity.ContentType.MediaType, MediaTypes.MessageRfc822, StringComparison.InvariantCultureIgnoreCase)
    && ((_entity.ContentDisposition == null)
        || string.Equals(_entity.ContentDisposition.DispositionType, DispositionTypeNames.Attachment, StringComparison.InvariantCultureIgnoreCase)))
{
[…]


Now you are ready to create your own function to parse all the BAD files from the Badmail folder…

public static ArrayList GetShortListOfBadmails(string badMailFolder)
{
    ArrayList result = new ArrayList();

    //Get all BAD files in the Badmail folder
    System.IO.DirectoryInfo dir = new System.IO.DirectoryInfo(badMailFolder);
    foreach (System.IO.FileInfo f in dir.GetFiles("*.bad"))
    {
        System.IO.StreamReader s = f.OpenText();
        string[] strArr = StreamToArray(s); //helper (not shown) reading the stream into an array of lines

        MimeReader mimeReader = new Net.Mime.MimeReader(strArr);
        MimeEntity mimeEntity = mimeReader.CreateMimeEntity();
        //MailMessageEx mimeEmail = mimeEntity.ToMailMessageEx();
        /*
         * Children[0] is the leading message (text)
         * Children[1] is the "Reporting-MTA: dns;akoszlajda-lap.pwpw.pl" message (delivery status)
         * Children[2] is the email which was supposed to be sent (rfc822)
         */
        //Find the child entity which holds the original, undelivered email
        int i = 0;
        bool rfc822Found = false;
        for (i = 0; i < mimeEntity.Children.Count; i++)
        {
            rfc822Found = string.Equals(mimeEntity.Children[i].ContentType.MediaType,
                MediaTypes.MessageRfc822, StringComparison.InvariantCultureIgnoreCase);
            if (rfc822Found)
                break;
        }
        if (rfc822Found && (mimeEntity.Children[i].Children.Count > 0))
        {
            MimeEntity nonSentEmail = mimeEntity.Children[i].Children[0];
            try
            {
                string strErrorDate = mimeEntity.Headers.GetValues("Date")[0];
                string strTo = nonSentEmail.Headers.GetValues("To")[0];
                string strEmailDate = nonSentEmail.Headers.GetValues("Date")[0];
                //"Mail sent to {To} on {Date} was not delivered" (message text kept in Polish)
                string err = strErrorDate + " - Mail wysłany do " + strTo + " " + strEmailDate + " nie został dostarczony";
                result.Add(err);
            }
            catch
            {
                //if some of the headers are missing, skip this entry
            }
        }
        s.Close();
    }

    return result;
}
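A hypothetical call, assuming the default IIS Badmail location and the usual usings of the surrounding project:

ArrayList badmails = GetShortListOfBadmails(@"C:\Inetpub\mailroot\Badmail");
foreach (string info in badmails)
    Console.WriteLine(info);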


Other option
Alternatively, bounce reports may be sent to a specified email address and you may parse the emails there using the same .NET POP3 MIME Client library, but you must somehow detect that a given email is a report about a bounced email. That may be a problem.
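One hedged way to do such filtering, assuming the fetched message is already parsed into a MimeEntity with the same library and that the bouncing server produces standard delivery status notifications (RFC 3464):

//A bounce report is conventionally a multipart/report message
//carrying a delivery status part (RFC 3464)
static bool LooksLikeBounceReport(MimeEntity entity)
{
    return string.Equals(entity.ContentType.MediaType, "multipart/report",
        StringComparison.InvariantCultureIgnoreCase);
}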

Conclusion
The solution is very narrow and is rather a proof of concept. There are a number of other reasons why an email may not be delivered (like the target mailbox being full), and the solution was not tested with Microsoft Exchange, but there should not be a big difference, as that system also has a Badmail folder mechanism.

On request, the zipped solution may be provided.
PS.
I know that there are some significant obstacles to getting Badmail working on Windows 2003 using the standard SMTP service! I am not quite sure if it is impossible or if there is some specific workaround.

Saturday, 17 November 2007

Domino Day 2007 - behind the scene

If you ever thought you had taken part in a big project - have a look at this. This year the show took place on 16.11 in a special 9500 m2 building space. So far the list of records has been...

Year  Details  Where  Toppled
2006 Domino Day: Music in Motion Netherlands 4.079.381
2005 Domino Day: Theatre of Eternal Stories Netherlands 4.002.136
2004 Domino Day: Challenge Netherlands 3.992.397
2002 Domino Day: Expressions for Millions Netherlands 3.847.295
2001 Domino Day: Bridging the World Netherlands 3.540.562
2000 China & Japan & South Korea China 3.407.535
2000 Domino Day: Reaction Netherlands 2.977.678
1999 China & Japan China 2.751.518
1999 Domino Day: Europa ohne Grenzen Netherlands 2.472.480
1998 Domino D-Day: Visionland Netherlands 1.605.757
1988 Europe in Domino Netherlands 1.382.101
1986 KLM Domino World Record Netherlands 755.836
1984 Team Klaus Friedrich Germany 281.581
1980 John Wickham and Erez Klein Japan 255.389
1979 Team Alistair Howden New Zealand 255.000
1979 Team Michael Cairney UK 169.713
1979 John Wickham and Erez Klein USA 135.215
1978 Team Bob Speca USA 97.500
1977 Team Michael Cairney UK 33.266
1974 Team Bob Speca USA 11.111

And this time the first and the 4th builder challenges failed, and mostly because of that the final score was just (!) 3.671.465. Some of the 90 builders (20 from Poland!), who set up around 4.500.000 dominoes all together, did not see ANY effect of their hard, 2-month-long work!

Actually there is a whole company, Weijers Domino Productions, dedicated mainly to putting on the show once a year and broadcasting it around the world. In 2006 they gathered 85 million viewers in front of TVs. The company employs 17 people (builders are gathered just for 2 months - they work just for fun), split into departments - management, domino design/physics/production plus ... domino development, preparing the projects for the show with a dedicated software solution. You can have a look at how a sample project is created. Starting from that, imagine how much work had to be done to make the master plan for Domino Day 2007.

Did you ever take part in such a big thing?

Thursday, 15 November 2007

fxCop story

It is quite common for companies to possess their own "Rules of coding" document, like the one from Philips, which is more or less a compilation of documents available on the net. I also strongly suggest reading "Framework Design Guidelines", which is my bible in this area. One thing is to write the paper; the other is to have it truly implemented. Some companies have an established process whereby no code may be checked into the repository without a code review performed by another developer, but it is an open issue whether this process ensures that all rules are always followed and that exceptions from the rules are registered. That is why it is always nice to have a cyber cop who will check once in a while whether the code in your repository follows the rules established in the written documents. The cop even has a name on his t-shirt and heeeeeaaaaar heeeeee issss - fxCop.

Each project is different, and each time you must select which rules will be used. I think the best way to deal with the issue is the following:

1. Create the fxCop project
Gather all the DLLs and EXEs which will be analyzed and add them in the Targets section. Analyze them and save the whole project.

2. Filter sets of rules
For example, if you are not planning to localize your software, you do not need the Globalization Rules at all. Similarly, if you do not have a legacy problem, you probably do not need the Interoperability Rules.

3. Filter rules
Check each rule. The more errors a rule produces, the more carefully you need to analyze what exactly the rule is about. In some situations fxCop provides the link directly; in others you must look at the help or query MSDN about the rule manually. You will quickly find that some rules are useless for you, like (for me):

  • EnumsShouldHaveZeroValue – we use enums just internally and they are not public
  • AvoidNamespacesWithFewTypes – the project is at an early stage and further extensions are planned shortly
  • MarkAssembliesWithClsCompliant – we do not need to be compliant with CLS

And some of them are golden (for me):

  • IdentifiersShouldBeCasedCorrectly – a priceless thing, which would usually take me a lot of time if performed manually
  • AvoidUncalledPrivateCode – it helps me to quickly find unused code
  • DoNotCallPropertiesThatCloneValuesInLoops – it should never happen

Apart from checking MSDN, if necessary open the code and check where the particular error happens. It will often turn out that only by looking at the code can you tell whether the problem is dummy or real.
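For a confirmed false positive, one option is to suppress the single warning in code. A sketch using the standard SuppressMessage attribute (note it is only honored when the assembly is compiled with the CODE_ANALYSIS symbol defined; the class, method and justification below are just illustrations):

using System.Diagnostics.CodeAnalysis;

public static class BadmailParser
{
    //Suppresses a single fxCop warning instead of turning off the whole rule
    [SuppressMessage("Microsoft.Naming", "CA1704:IdentifiersShouldBeSpelledCorrectly",
        Justification = "'Badmail' is the name of the IIS folder")]
    public static void ScanBadmail()
    {
    }
}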

Analyze the code again and check how much work it is to standardize the code. If there is more than you can afford, you need to repeat steps 2 and 3. If you can, talk with your developers to ensure that your point of view is valid.

4. Look at the error sets and ensure that they are valid
Forget for a second about fxCop and how cool it is, and try to put yourself in the developers' shoes. Are the listed errors really important; do they really need to spend time and risk the repair (which may cause additional errors)? Remember that you should implement the whole thing incrementally to convince people that it truly empowers their work. If you start with too many rules at the beginning, you will have people doing refactoring for months, which will not make them or your boss happy ;) That is why it is good to "think big and start small".

Filter, for the last time, the list of suggestions for developers.

5. Establish the process
The tests below were performed on a solution which consists of 62 projects and has around 30k LOC.

Option A
Run the report from the command line manually. The default XML report is rendered nicely by IE – options like expand/collapse work there without any configuration. You may generate a CSV in the same way, but in fact it is the same XML (just the extension changes). The disadvantage is that you need to run the whole process manually, and you also need to update the list of targets as outputs (dis)appear; the plus is that it does not disturb the daily work of developers. A batch command like:

fxcopcmd /p:"..\..\..\_fxCop\ESPO1v0.FxCop" /c /o:".\FxCopReport.xml"

took me about 1:21.

Option B
The other idea is to apply the fxCop process as a post-build step for the Release version of the product ("Adding FxCopCmd to Your Build Process"). This way daily Debug builds stay quick, and when everything is ready a key developer does a Release build and immediately gets the report of things to repair. Unfortunately, post-build actions are available at the project level, not the solution level, so you must either apply it to each project separately, or once to the project which is built last in the whole solution, if you are sure it will always be compiled last (Project Build Order). In this case, you must also be sure that each person:

  • Has fxCop installed
  • Has it installed in the same place, or has the path set properly
  • Has the fxCop project downloaded to the same place on each PC (a shared folder is possible)

FxCopCmd /p:"D:\SVN\ESPO\fxCop\ESPO.FxCop" /c /o:"D:\SVN\ESPO\fxCop\FxCopReport.xml"

Another option suggested here is to use the VS 2005 add-in and hook it into the MSBuild process.

Option C
An even more interesting option is to add the action as a post-build step in every project. You need to change the fxCop project and clear the Targets tab. You may arrange that the folder where fxCop is installed is added to the Path environment variable, along with a common folder holding the fxCop project and all the reports. You then need to ensure the post-build action is added to each project to have the whole thing working:

Fxcopcmd /f:"$(TargetPath)" /p:"..\..\..\_fxCop\ESPO1v0.FxCop" /c /out:"..\..\..\_fxCop\$(ProjectName)-fxCopReport.xml"

You need to know that my sample solution compiled in 32-45 seconds without fxCop and in 8:10 with fxCop for option C (even with just 5 rule sets turned on)! Only if you don't have too many projects may this be a good option. In other cases option A or B will probably be better.

Whichever option you choose, there has to be some negotiation stage (especially at the beginning, until the set of used rules stabilizes), when developers may give their opinion about the usability of particular rules or changes. Once it is established and the set of rules is redefined, it must be double-checked after some time whether the errors have been repaired.

At a later stage you may also consider writing your own, customized rules, but you must always consider whether it really saves time ;)

Conclusion
You must spend a real piece of time to make the whole thing REAL, but I strongly believe the automated effect is usually worth it, especially if:

  • the project is long term
  • it is open source, or you need to pass the rights to the source code to somebody else
  • the code is analyzed externally
  • others may write plug-ins for it
  • developers switch often

Of course you must review the whole thing once in a while and redefine the set of used rules (make the tolerance narrower or wider), but even the mere knowledge of the post-build process will motivate developers to write better code.

Last but not least, it is important to note that the automated process is just a support for, not a replacement of, manual code reviews.

Monday, 12 November 2007

Google Reader and other Google toys - quick review

Like most people, I register RSS feeds when I have some time to spare and then usually do not have enough time to read them frequently. I am just after a romance with RSS Bandit, which is one of the best standalone RSS readers (in my opinion). Then I tried the RSS reader embedded in IE 7 and Outlook - what a creepy implementation it is (bleh). Now I am looking at Google Reader, used by one of my friends, and I must admit I am positively surprised.

The cool things which I truly like about the solution are:
  • Very clean GUI - e.g. there is one box where you may add the URL of an RSS feed or the keywords to look for
  • Very cool (un)read mechanism - e.g. after merely VIEWING the lead of a news item it is marked as read; when it happens the first time you are warned about it (!), plus there is a note that it is just the default behaviour. Amazingly simple!
  • The real juice of the whole concept is the share button at the bottom of each news item - it allows you to select news which you may publish publicly on your own, dedicated page (all provided by Google). It may look like this
  • Apart from the share thing there are also a couple of other useful options at the bottom of each feed, which makes them easy to access and truly user-friendly
  • Trends - quite a simple but nice statistics module showing how much content is coming from which source
  • Good administration model - there is a whole section called "Manage subscriptions", where you can do a number of things which you do not do daily - set up the look of the whole service, its settings and (the most interesting) import/export of your feeds using our good old friend OPML. Here I finally found the description of the whole thing and all the answers to the questions asked before.
  • Integration with the Google blog platform - on the right side you may see a widget which was created automatically from Google Reader
Google vs MS - 2:0 (the first point was earned when the blog platform was chosen ;) )
ps.
BTW, I have rediscovered the Google Toolbar Button Gallery, which allows you to highly customize your Google Toolbar (if you do not have it, it is time to download it from here). Quite often I want to search for a particular phrase just in Wikipedia, and the most popular Google Toolbar button does just that. There are also specific buttons for CodeProject, Google Books Search and a big number of others, like Google Calendar.

Saturday, 10 November 2007

Marriage of Prince2 with Microsoft Solution Framework

Very often a question pops up like: 'Which methodologies do you know/use?'. There is usually some smart answer like PMBoK, RUP or SCRUM, but when the question is drilled down, it turns out out of the blue that there is just a set of unique practices - a mixed salad of everything the market offers and the world wide web says. There is nothing wrong with that; furthermore, it is a wise strategy to patiently collect the practices which truly work. Wise senior executives will slow down the saintly paladins who, coming back from training, try to change the whole company into Six Sigma or whatever else. Of course this type of change is possible, but it must be implemented incrementally and it must respect existing, well-working practices, even if they do not fit the template.

For a long period of time I have been using (or trying to use) MSF and I am quite used to it. Of course, there had to be some compromises each time and it was never a pure implementation. For example, only once did I have a situation with a true separation of the project manager and product manager roles.

Nevertheless, at my current position there is a strong rule across the whole department to use the Prince2 methodology. This type of situation - common and strong support from top executives - does not happen often. I am currently on my learning curve with the methodology, and I must admit that only the things you do not know scare you. Furthermore, it seems a good fit to treat Prince2 as the general flow of documents at the higher, executive level, where written documents are necessary to take serious, strategic decisions, and MSF as a parallel, technological flow at the tactical level.

The main flow of the Prince2 is more-or-less as follows:
  • [DOC] Project Mandate
  • [PROC] Starting up a project (SU)
  • [DOC] Project brief
  • [PROC] Directing a project (DP)
  • [PROC] Initiating a project (IP)
  • [DOC] Project Initiation Document
  • [PROC] Directing a project (DP)
  • Stage after stage
  • [PROC] Closing a project (CP)
  • [DOC] End Project Report

DOC : Document
PROC : Process

DP is mostly about the decisions of the steering committee: 'do we continue the project or not?'
There can be any number of stages, one after the other

A stage is in fact a mixture of four processes: Controlling a Stage (CS), Planning (PL), Managing Product Delivery (MP) and Managing Stage Boundaries (SB), but to simplify things I just called it Stage.

The Prince2 methodology definitely formalizes the project, but in the case of big enterprises that is actually a good approach, especially at the beginning and at the end. After the key decision is taken, each stage of the project may be passed to the MSF side, where the Vision phase is already covered and it is time for the remaining 4 phases – planning, development, stabilization and deployment. These are strongly technical activities which are not in the scope of Prince2. The best example is the planning phase, which has a totally different meaning in each. In Prince2 it is about the way the plan is constructed: what will be delivered (products), what the linkages between products are, risk management, estimation and the master plan. In MSF it should actually be called design, and it is split into conceptual and technical planning of the architecture.


At the end of the whole project, the deliverables (work packages) provided by MSF should be checked within the Prince2 Controlling a Stage (CS) process or Closing a Project (CP). This double-checks that the final result matches what was expected and defined in the Project Initiation Document.

Assume the business case of a big company A, which requests the delivery of a big software product X, and a smaller company B, which is a pure software house. Company A may use Prince2 and company B MSF, without bigger problems in communication. Actually, Prince2 itself anticipates that company B may use a different methodology (not Prince2), and that is the main purpose of the whole Managing Product Delivery (MP) process.

Furthermore, the whole concept is not new when we speak about Prince2 and RUP (I really did find the link only after writing the first version of this document) ;)

Saturday, 3 November 2007

How to be happy?

My manifesto says that this place is a techblog, but at the end of it I hid an option that under specific circumstances any or all of the rules may be broken - here we are. I cannot stop myself from writing down below the 10 rules of a happy life written by psychologist Krzysztof Szymborski, PhD, which I read in the "Polityka" weekly:

1. Do not avoid sex - intimate closeness with another person increases our self-esteem
2. Turn off the TV
3. Smile - even a forced smile makes you feel better
4. Call a friend
5. Delight in nice moments
6. Walk 2 km at a fast tempo - moderate physical exercise favours the feeling of euphoria
7. Take a job which you like, and during your free time engage in social activity - become a member of a choir or plant a tree in a public square
8. Enjoy what you have
9. Each day give yourself a pleasure and take the time to really enjoy it
10. Each day try to do something good for somebody else

It is amazing. Pure, solid and complex, huh?
I need to implement this design pattern ;)
 