Friday, January 7, 2011

Progression

I've been struggling with one aspect of SharePoint for over four years now. I have implemented many different solutions to deal with it, and over time have come to understand the problem at a fundamental level.  The problem is easily defined, and one would be forgiven for thinking that a solution would be equally simple: how do you present a custom-styled view of dynamic data in a SharePoint page?

The problem of course gains nuance as you go deeper, and there are a few caveats.  For one, let us assume that we cannot simply create a new web part, nor extend an existing one; we are only able to use the SharePoint front-end (which is not an uncommon constraint).  Also, the actual content needs to be entered with a WYSIWYG tool.  So if we are displaying a stylized news feed of the top X stories, the news items are added to the news list with a WYSIWYG tool.  This is not so important if you think of the presentation as a table view of titles and dates, but it becomes critical when you think of a carousel view, with rich imagery and stylized text for each item (and a link to read the full article).

When I first started out, I did my research and correctly arrived at the conclusion that the content query web part was what I was looking for.  However, it turned out to be a very difficult web part to figure out.  For one thing, it is not clear how you go about customizing the XSL, which I later found was coming from three different files.  When I realized that I needed to edit ItemStyle.xsl, which is shared across the whole site collection, I gave up and decided that this approach was not the right one.  It was not clear to me at the time that you could modify the web part to point it at your own versions of these files (and back then I would not have had the XSL knowledge required to do it anyway).

I then explored the possibility of using the XML reader web part, and this remained my method of choice for a couple of projects.  I was able to retrieve the RSS feed for the lists inside of this web part and parse the XML to render the items that I wanted.

The problem with using the RSS feed is that it is, well, an RSS feed.  For some inexplicable reason, the RSS feeds that MOSS generates don't actually expose the list's columns as XML at all.  Below is a snippet of one such feed, pulling items from a list containing some custom fields:

[Screenshot: the original list. Note the custom column names.]

[Screenshot: the RSS feed for the same list. Try to find the column names in the "XML".]
What you can see is that MOSS doesn't actually present the column data as XML: it creates a "description" tag and then just dumps all of the information into it.  What's worse, there is no reliable way to differentiate the column headings from the content at all.
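To give a concrete idea of the problem, here is roughly what one item in such a feed looks like (the column names are invented for illustration; the point is the shape): every custom column is flattened into a blob of HTML inside the description element.

    <item>
      <title>Board approves new budget</title>
      <link>http://intranet/Lists/News/DispForm.aspx?ID=12</link>
      <description><![CDATA[<div><b>Body:</b> The board met on Tuesday to ...</div>
        <div><b>Publish Date:</b> 1/4/2011</div>
        <div><b>Image:</b> http://intranet/PublishingImages/board.jpg</div>]]></description>
      <author>DOMAIN\jsmith</author>
      <pubDate>Tue, 04 Jan 2011 14:32:00 GMT</pubDate>
    </item>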

There is an additional problem with the RSS method: there is no authentication.  MOSS requires that the feed allow anonymous access, which means that if you are just using it to shuttle dynamic data between pages on the site, your content is visible to anyone with the URL.  In most cases this wouldn't be a problem, but it is definitely a content security hole.

It became apparent that another solution was needed.  At this point, I revisited the content query web part, still convinced that it was the best solution.  Eventually, I came across a few very illuminating articles that outlined exactly what I was trying to accomplish.

I had a huge epiphany when I realized that it was possible to redirect the web part to custom versions of the three XSL files it uses.  However, this is not actually accessible through the MOSS interface.  You have to export the content query web part to your computer and open it in Notepad++ or an IDE, and after editing it, upload it back to the page through the MOSS interface.  There are fields in the XML that let you specify alternate locations for the three files (MainXslLink, ItemXslLink, HeaderXslLink).  HeaderXslLink is not that important; the other two, however, are vital.
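For reference, the relevant section of the exported web part file looks something like this (reconstructed from memory, and the paths are examples only; point them at wherever you keep your custom copies, e.g. the Style Library):

    <data>
      <properties>
        <!-- ...other properties... -->
        <property name="MainXslLink" type="string">/Style Library/XSL Style Sheets/MyContentQueryMain.xsl</property>
        <property name="ItemXslLink" type="string">/Style Library/XSL Style Sheets/MyItemStyle.xsl</property>
        <property name="HeaderXslLink" type="string">/Style Library/XSL Style Sheets/MyHeader.xsl</property>
        <!-- ...other properties... -->
      </properties>
    </data>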

The problem with the web part, in general terms, is that it is similar to the RSS feed: the data is not presented as XML; all of the content fields are just dumped into a "description" tag.  It's inexplicable as a design choice, but there are ways around it, as I found.  Firstly, I had to remove the table layout completely.  To do this, you have to manually edit the XSL in the file that MainXslLink points to and alter the grouping mechanism: basically, strip out everything to do with creating table, tr, and td tags, which is actually quite involved (most of it lives in the OuterTemplate.Body template).  Then you have to alter the OuterTemplate template to emit your outer div and optional ul.
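In spirit (and drastically simplified; the real ContentQueryMain.xsl also handles grouping, column counts, and the "no items" text, and the structure here is reconstructed from memory), the end result is an outer template that wraps the rows in a div and a ul rather than a table:

    <!-- Simplified sketch only, not the actual edit. -->
    <xsl:template name="OuterTemplate">
      <xsl:variable name="Rows" select="/dsQueryResponse/Rows/Row" />
      <div class="feed">            <!-- made-up class names -->
        <ul class="feed-items">
          <!-- Hand each row to the item templates in ItemStyle.xsl -->
          <xsl:apply-templates select="$Rows" mode="itemstyle" />
        </ul>
      </div>
    </xsl:template>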

The web part file that you edit also has other fields, which let you select which columns from the list you want returned (CommonViewFields) and rename them (DataColumnRenames).  CommonViewFields is tricky to use: you need the exact column name, remembering that spaces are encoded as "_x0020_" (other special characters are encoded similarly), and you also need to know the column's data type, although for some fields you can specify Text and get back a textual representation.  There is a convenient list here.
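For example (the column names are invented, and I am relying on memory for the exact delimiter syntax):

    <property name="CommonViewFields" type="string">Publish_x0020_Date,DateTime;Image_x0020_Link,URL;Summary,RichHTML</property>
    <property name="DataColumnRenames" type="string">Publish_x0020_Date,PublishDate;Image_x0020_Link,ImageLink</property>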

This brings us to ItemStyle.xsl, which is the real beast.  It controls what gets rendered for each record that the query returns.  If you chose to wrap everything in a ul above, you can generate an li here for each item, or you can use a div instead.
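A minimal sketch of such a template (the style name "NewsItem" and the class names are made up; Title, LinkUrl and Description are the standard slots, or columns passed through CommonViewFields):

    <!-- Goes inside your copy of ItemStyle.xsl, alongside the templates MOSS ships with. -->
    <xsl:template name="NewsItem" match="Row[@Style='NewsItem']" mode="itemstyle">
      <li class="news-item">
        <a href="{@LinkUrl}">
          <xsl:value-of select="@Title" />
        </a>
        <div class="news-blurb">
          <!-- Rich fields arrive as escaped HTML, hence the output-escaping switch. -->
          <xsl:value-of select="@Description" disable-output-escaping="yes" />
        </div>
      </li>
    </xsl:template>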

I found out the hard way that you have to remember to escape reserved characters like & (as &amp;).  Your XSL also has to be flawless, or the web part will simply die without telling you why, because there is no way for us to see the parser's errors.  To get around this somewhat, I used Visual Studio, which is good for validating and debugging XML and XSL.

After all this, I was able to pull the data out as HTML that could be styled through CSS.  Another benefit is that this web part can be cached, saving some time on page load.  At this point, I channeled my inner GW Bush: Mission Accomplished!

I used this approach on projects for a while, modifying it as I went and learning how to do things like parsing dates into custom formats, varying the output depending on a field's value, and so on.  My XSLT came a long way over the course of that period.

Side note: I use XSL and XSLT almost interchangeably, as most people do, since there is a lot of ambiguity in the terminology.  Please disregard this if you have a different preference.  I use the terms mainly to refer to the (platform-independent) technique of transforming XML into another format (in this case, HTML).

Then one day I received an oddball request to combine the content from two separate lists into one styled content area.  For this project, we had one main site containing three lists, which stored HTML content shared amongst the 50 or so sub-sites.  The content was broken up into sections that defined how it appeared on the page.  Each sub-site had its own versions of these lists, through which it could append to the various sections of the page content.  The global content always appeared first, and the global and local content shared the same section headings.

I used two content query web parts to pull in the two pieces of content (there are no filter web parts available, so I had to generate all of these manually and upload them to the page).  But how would one combine the two lists' content into one contiguous piece of HTML?  I had to resort to JavaScript.  We needed JavaScript anyway, for show/hide functionality.

So I used jQuery to read in the content and store it in a huge array keyed by a hash of each section heading, with a separate array to hold the positioning information for each section.  It finally worked.
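In rough terms, the idea was something like the following sketch.  The container and class names are invented, and the real script was considerably hairier, but the key-by-heading approach is the same:

    // Rough sketch only: assumes each web part renders its sections as
    // div.content-section blocks, each starting with an <h3> heading, inside
    // (made-up) .global-content and .local-content containers.
    var sections = {};   // section bodies keyed by normalised heading text
    var headings = {};   // original heading text per key
    var order = [];      // the order in which headings were first seen

    function collect($container) {
      $container.find('.content-section').each(function () {
        var $section = $(this);
        var heading = $.trim($section.find('h3').first().text());
        var key = heading.toLowerCase();
        if (!sections[key]) {
          sections[key] = [];
          headings[key] = heading;
          order.push(key);
        }
        // Keep everything in the section except the heading itself.
        sections[key].push($section.children().not('h3'));
      });
    }

    $(function () {
      collect($('.global-content'));   // global content always comes first
      collect($('.local-content'));    // then the sub-site's additions

      var $combined = $('<div class="combined-content"></div>');
      $.each(order, function (i, key) {
        var $block = $('<div class="content-section"></div>')
          .append($('<h3></h3>').text(headings[key]));
        $.each(sections[key], function (j, $body) {
          $block.append($body);
        });
        $combined.append($block);
      });

      $('#combined-target').empty().append($combined);  // made-up target element
    });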

Here is the dirty secret: up until this time, I had not actually made full use of the custom XSL files.  I had only succeeded in removing the table layout; I had never thought about how to make the output easier to parse with JavaScript (there had simply been no need).  As a result, on this project I was using some really heavy JavaScript to parse all of this, and it meant the HTML had to come in perfectly (which is a lot to ask when the content is maintained by the client via an unreliable WYSIWYG tool).  During the course of this project, I realized that I could use CommonViewFields in my rewired web part to emit attributes and tags that let me manipulate the content via JavaScript directly.

I was now able to use the content query web part to its full potential, presenting the HTML in a form that let me manipulate the content with JavaScript as I saw fit.  However, I was still waiting for the content to fully render as HTML and for the document to be ready before transforming that HTML into different HTML, which meant a lot of DOM manipulation.

There is also one fundamental flaw in the web part: it is, for some reason, unable to pull in the values of a "Lookup" field that has been set to allow multiple values.  This matters because you can set up one list to manage, say, the "Categories" for a blog, and then use a field of type "Lookup" in another list (or blog) to assign as many categories as you want.  The content query web part will not retrieve any of them, which I found out after a lot of searching.  It is a problem with the web part itself, and it has still not been fixed.

This is a big obstacle when you are trying to let the client edit their site as they want: to maintain the categories, they need to go into the list settings and mess around in there (which always leads to bad things).  The downside of replacing the lookup with a "Choice" type field and entering the choices manually is that the choices are not linked by ID, so if you change or delete a choice, the existing list items will not update automatically.  If you changed an item in a linked list, though, the change would propagate, provided you were using a "Lookup" type field.

This is when I stumbled on a technique I had previously dismissed as inadequate.  A mad genius of a programmer who had come before me on a project had used SOAP to query a custom list directly and retrieve its data.  At the time, I had failed to understand how it was even possible, and I found SOAP in general too cumbersome to use, especially with its 'interface' being implemented from scratch in JavaScript.  I later decided to revisit the script, eventually stumbled on the MSDN articles outlining how to query lists directly through SharePoint's web services, and it opened my eyes.

After trying unsuccessfully to come up with a nice way to encapsulate all this myself, I stumbled onto a script by another genius, and this is what I am using now.  All the mucking around with XSL has been thrown out of the window: I can now just use JavaScript (asynchronously, at that) to retrieve the dynamic content while serving up the rest as static HTML.  It really is something to behold after struggling for 3+ years in search of a solution.  I only use it to query custom lists for items, but the script can do pretty much anything you could do on a page, including pushing items back into lists.  You don't really need to ask the user to submit a form in the traditional sense at all: you can just use JavaScript to send their form data to the correct location, saving yourself the trouble of hiding the full list view from the user or manipulating the form-view pages.
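I won't reproduce the script here, but the underlying technique is easy to sketch with plain jQuery: a raw SOAP call to the GetListItems operation of the Lists.asmx web service.  The site path, list name, and field names below are placeholders, and a real version needs error handling:

    // Bare-bones sketch: query a list via the Lists.asmx SOAP web service.
    function getListItems(siteUrl, listName, rowLimit, callback) {
      var soapEnvelope =
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">' +
          '<soap:Body>' +
            '<GetListItems xmlns="http://schemas.microsoft.com/sharepoint/soap/">' +
              '<listName>' + listName + '</listName>' +
              '<rowLimit>' + rowLimit + '</rowLimit>' +
            '</GetListItems>' +
          '</soap:Body>' +
        '</soap:Envelope>';

      $.ajax({
        url: siteUrl + '/_vti_bin/Lists.asmx',
        type: 'POST',
        dataType: 'xml',
        data: soapEnvelope,
        contentType: 'text/xml; charset=utf-8',
        beforeSend: function (xhr) {
          xhr.setRequestHeader('SOAPAction',
            'http://schemas.microsoft.com/sharepoint/soap/GetListItems');
        },
        success: function (xml) {
          // Each item comes back as a z:row element whose columns are
          // ows_-prefixed attributes; the doubled selector is the old
          // cross-browser workaround for namespaced nodes.
          $(xml).find('z\\:row, row').each(function () {
            callback($(this));
          });
        }
      });
    }

    // Usage: asynchronously render the titles from a (made-up) "News" list.
    $(function () {
      getListItems('/sites/intranet', 'News', 10, function ($row) {
        $('#news-target').append(
          $('<div class="news-item"></div>').text($row.attr('ows_Title')));
      });
    });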

Thus ends a four-year struggle to find a way to present dynamic data to the user from within a MOSS page.  Today, I discovered that I can use this script in a pure HTML page, manually uploaded to any "Pages" folder in MOSS, to present content entered through a WYSIWYG editor into a custom list, bypassing the MOSS-generated master pages and other markup completely.  With that, I think I have explored this problem to its fullest, and have discovered just how far I am willing to go to solve something that I think should be solvable.
