    Internet Explorer, HTTPS and file downloads: “The file could not be written to the cache”

    Published August 17th, 2011

    [Please see update below for a proper fix]

    Today I received an email from a client that had just moved their site over to SSL (i.e. HTTPS) — they were no longer able to download files in Internet Explorer. When they tried to do so, a message would appear:

    Unable to download...[file name]

    The file could not be written to the cache

    (an alternative error message is “Internet Explorer cannot download [file name] from [site].
    Internet Explorer was not able to open this Internet site. The requested site is either unavailable or cannot be found. Please try again later.”)

    I tried various suggestions: different response headers, settings to do with saving encrypted files, and even tried (and failed) to install a couple of hotfixes.

    In the end, I’d nearly given up when I found this knowledgebase article and decided to have a go at manually adding the BypassSSLNoCacheCheck registry key.

    Short story, it worked. It’s not great since it requires the user to do something — I couldn’t fix it server-side — but it was OK in this case because the site was an intranet so their IT people could sort it. Anyway, I hope this helps someone else out there…

    UPDATE: OK, after a lot more experimentation I found a proper solution that doesn’t involve registry updates!

    It’s a long story but it turns out that PHP was automatically sending an extra header (“Pragma: no-cache”) that I hadn’t noticed. So, I set the headers as follows:

    header("Cache-Control: private");
    header( "Pragma: private" );

    …which overrides the automated headers and appears to solve the problem. Hurray!
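
    For context, here’s roughly how those two headers might sit in a complete download script. This is only a sketch: the session_start() call, the file path and the filename are illustrative assumptions, not taken from the client’s code.

    <?php
    // Hypothetical download script served over HTTPS.
    session_start(); // starting a session is a common source of the automatic "Pragma: no-cache" header

    $file = '/path/to/report.pdf'; // illustrative path

    // Override the automatic headers so IE can write the file to its cache before saving it
    header("Cache-Control: private");
    header("Pragma: private");

    header('Content-Type: application/pdf');
    header('Content-Disposition: attachment; filename="report.pdf"');
    header('Content-Length: ' . filesize($file));
    readfile($file);
    exit;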

    Using .htaccess to suspend an entire site while still working on it

    Published March 3rd, 2011

    Sometimes, when you’re working on a web site, you need to make some changes that could cause problems if there is someone else using the site at the time. The obvious solution is to suspend the site — e.g. put up a ‘this site is down for maintenance’ page — but how do you do this while still being able to view and work on the site yourself?

    In fact, it’s really quite easy, using .htaccess. Simply put, you use mod_rewrite to redirect all requests to the maintenance page, unless the user’s browser has a particular cookie.

    RewriteEngine On
    RewriteBase /
    RewriteCond %{REQUEST_URI} !^/maintenance-page\.html$
    RewriteCond %{HTTP_COOKIE} !secret-cookie [NC]
    RewriteRule .* - [L,R=503]
    ErrorDocument 503 /maintenance-page.html

    Clearly you can change the name of the cookie from ‘secret-cookie’ to whatever you like. Its value isn’t important. And of course, you can change the name of your maintenance page.

    Update: as suggested by Michael, below, I’ve added the R=503 status code to indicate that the service is temporarily unavailable. Since 503 isn’t a redirect status, mod_rewrite ignores the substitution, so the maintenance page is served via ErrorDocument 503 instead (and the first RewriteCond stops the error document itself being caught by the rule).

    Now you just need to create the cookie in your browser. Personally I use the FireCookie extension for Firebug which makes this very easy.
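
    If you’d rather not use a browser extension, another option is a tiny one-off PHP script that sets the cookie for you. This is a hypothetical helper, not part of the original post: visit it once before you switch the maintenance rules on (or exclude it with an extra RewriteCond), then delete it.

    <?php
    // set-bypass-cookie.php (hypothetical name): gives your browser the bypass cookie.
    setcookie('secret-cookie', '1', time() + 7 * 24 * 60 * 60, '/'); // any value will do; valid for a week, site-wide
    header('Location: /');
    exit;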

    That’s it!

    FireQuery: a Firebug add-on for jQuery

    Published February 4th, 2011

    If you work with jQuery, then I don’t think I need to explain why a plugin that shows you any jQuery-related properties of a DOM element would be useful — so go on and get it here. Of course you’ll need to have installed Firebug first (which you should have anyway, duh).

    Generating PDFs in PHP

    Published December 2nd, 2010

    For several recent projects I’ve been called upon to produce output in PDF format. For a PHP coder the difficulty lies not in the task itself, but in choosing which of the numerous PDF generation libraries to use.

    I’ve tried several over the past 18 months or so, including FPDF and TCPDF. FPDF is a direct wrapper for low-level PDF functions, which means getting your head around PDF’s layout behaviour yourself; powerful, but rather time-consuming. TCPDF does the same but also provides the ability to import HTML documents, which is much easier but unfortunately not as stable as one might hope: in particular I had problems with elements (e.g. tables) that spanned pages.

    For my most recent project I’ve been experimenting with dompdf, which does away almost entirely with the notion of directly interacting with PDF layout in favour of attempting to provide more robust and flexible HTML import. As such it has impressively advanced CSS support, as well as excellent handling of tables and other markup. The documentation is somewhat lacking so in some cases it’s necessary to dig around in the support forum to find out what you need. But so far I’d say it’s by some margin the best of the libraries I’ve tried.
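
    To give a flavour, basic usage looks something like this. It’s a rough sketch based on the release that was current at the time; the include path is an assumption about where the library is unpacked, and newer releases use the namespaced Dompdf\Dompdf class with loadHtml() and setPaper() instead.

    <?php
    require_once 'dompdf/dompdf_config.inc.php'; // adjust to wherever dompdf lives

    $html = '<h1>Example</h1><p>Any HTML and CSS you want rendered as a PDF.</p>';

    $dompdf = new DOMPDF();
    $dompdf->load_html($html);
    $dompdf->set_paper('a4', 'portrait');
    $dompdf->render();
    $dompdf->stream('example.pdf'); // send the generated PDF to the browser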

    On redirects, spiders and security

    Published July 13th, 2010

    Recently I’ve been working with an agency that has its own simple PHP web site framework. During the course of working with them, a problem arose: pages were disappearing, apparently without human involvement.

    With some detective work they had discovered that somehow the ‘secure’ CMS part of the site — where the client can log in to make changes — was being crawled by automated search engines. Part of the CMS is a list of all the site’s pages, each with links to the usual operations — edit, delete etc.  When the spider was indexing the pages, it also accessed the delete link, thereby deleting the page (much like this DailyWTF story).

    Oops.

    Anyway I took a look and while the security wasn’t great — it was based around cookies with no server-side validation — it still seemed odd that the spiders were able to access the pages. I implemented a slightly more robust system using sessions, added an entry to robots.txt, and marked it as solved.

    And then it happened again.

    I couldn’t work out what was going wrong, so to stop it happening I converted all the delete links to forms. But it was nagging at me — how was it that the search engines were reaching the pages at all? Why weren’t they being rejected when the security script checked for a cookie and session?

    Finally, the penny dropped…

    The security check worked by looking for a valid session, checking it for a ‘user is logged in’ value, and if one wasn’t found then sending a redirect header pointing to the login page. Nothing unusual there. So what was going on?

    When PHP sends a redirect header, the browser says “OK, I’ll go to this other page” and the user’s none the wiser. But just because the browser is no longer listening, that doesn’t mean the script automatically stops running. In fact, unless you tell it to stop it just continues as if nothing had happened. Thus, the spiders were simply ignoring the header and receiving the page as if there were no security in place at all.

    The solution? Add an ‘exit;’ after the redirect header. Simple!
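
    In code, the difference is a single line. A minimal sketch (the session key and filenames are illustrative, not from the site in question):

    <?php
    // Included at the top of every page in the 'secure' CMS area.
    session_start();

    if (empty($_SESSION['logged_in'])) {
        header('Location: login.php');
        // Without this, PHP carries on executing and sends the rest of the page
        // to any client (such as a spider) that ignores the redirect.
        exit;
    }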