Closed Bug 274784 (blazinglyfastback) Opened 20 years ago Closed 19 years ago

Make back and forward blazingly fast and side-effect free

Categories

(Core :: DOM: Navigation, defect, P2)

defect

Tracking


RESOLVED FIXED
mozilla1.8beta2

People

(Reporter: brendan, Assigned: bryner)

References

(Depends on 12 open bugs, Blocks 1 open bug)

Details

(Keywords: perf, Whiteboard: (o72555) Set browser.sessionhistory.max_viewers to nonzero to test)

Attachments

(9 files, 6 obsolete files)

1.36 KB, patch
shaver
: review+
Details | Diff | Splinter Review
316 bytes, text/html
Details
501 bytes, text/html
Details
146.10 KB, patch
bzbarsky
: review-
bzbarsky
: superreview+
brendan
: approval1.8b2+
Details | Diff | Splinter Review
146.49 KB, patch
Details | Diff | Splinter Review
688 bytes, text/html
Details
690 bytes, text/html
Details
164.66 KB, patch
Details | Diff | Splinter Review
1.57 KB, patch
dbaron
: review+
dbaron
: superreview+
Details | Diff | Splinter Review
This bug is where the real work to fix bug 38486 will go on. That bug is a pile of noise and a useless soapbox on which free-lunch-demanders wish to stand. /be
Status: NEW → ASSIGNED
Priority: -- → P2
Target Milestone: --- → mozilla1.8alpha6
Can this bug block bugs 91351 and 163993 as the original bug did? I think it's important for those tracking it (though I realize it may attract people to this bug).
Bug 91351 seems fine. I'm not marking it as blocking 163993 since it's not known yet whether this idea is a good thing.
Blocks: 91351
In the death-bug, I said: To do this properly, I think you have to separate the JS global scope object from the "window" object, which is something I think we should do for security reasons anyway. I don't see why scripting is a big deal; if we're going back/forward to a page which has already "loaded", there is no reason to fire onload events. Brendan then said, "we must fire onload when reloading from history, for backward compatibility -- so must every other browser". And Hixie said: Opera doesn't do that; it's the source of many of our DOM bugs (to do with history), but we consider them worth the price of lightning-fast back/forward.
I could see a pref for fast vs. compatible back/forward being worth the UI, but before conceding that point, perhaps we could do some analysis of pages that break if they don't get reloaded (from cache) on session history navigation (back/forw). /be
Ian, can you give a few examples of URLs that break without onload-on-Back? /be
Not off-hand, no.
Interestingly enough, I just came across a page of discussion on this bug, except in reverse and for Opera: http://www.quirksmode.org/bugreports/archives/2004/11/load_and_unload.html - if you're still looking for an example of a page broken by this functionality, I'm sure you could ask him. It seems as though the Opera folks haven't really found a solution either; they just went in the opposite direction. Would it be possible to simply not load from cache if the page includes onLoad or onUnload events? That would improve performance (storage speed notwithstanding) for most pages, while retaining compatibility for JavaScript-generated pages, wouldn't it? Oh, also, does anyone have some kind of estimate of how much storage this would take?
Is it not feasible to cache the work of the parser? I was under the impression the XUL cache does that. It won't cause any back/forward quirks, and it could be used for all page loads, not just for backing up to a previous page. If an html page is to be loaded from the disk cache, check for an associated parser cache entry. If it's there, use it to load the page.
But I, at least, don't want the parser results when I go back. I want exactly the same DOM tree that I left when I went forward. I'm pretty sure that not firing the onload event would be the correct behavior in this case; I just want to see what parts of the existing web this would break, and why. Perhaps we need to implement the "onenter" event mentioned in the linked Opera thread.
hm, I guess not firing onload would break things that do stuff that's not reflected in the DOM... (window opening... alerts... confirm()... others?)
A lot of sites use onload events or top-level scripts, but most of these sites would not be broken by DOM-storing. For example, a site that just twiddles element positions would not be affected, and a site that prompts for your age while loading or in onload would be *improved* by DOM-storing. The only onload actions that might need to be redone are actions that change external state, such as variables in another frame/window or a cookie. I would expect many sites that use scripts to alter external state to have onunload events that undo those changes. Is that true -- do most sites broken by Opera's DOM-storing feature use onunload? If so, Firefox could use the following heuristic: store the DOM in history unless the site has an onunload event.

To summarize the possibilities:
A) Store everything. Drop support for onunload. (Opera)
B) Store unless the site uses onunload. (My suggestion)
C) Store unless the site uses onunload, onload, or inline scripts. (Comment 7)

(A) or (B) could be combined with adding support for onenter and onleave.
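The A/B/C policies above reduce to a one-line decision each. A minimal plain-JavaScript sketch, assuming the embedder has already summarized a page into boolean flags (`canStoreDom` and the flag names are hypothetical, not Gecko API):

```javascript
// page: { hasOnunload, hasOnload, hasInlineScripts } -- a summary the
// embedder would have gathered while parsing; not a real DOM object.
function canStoreDom(page, policy) {
  switch (policy) {
    case "A": return true;                  // store everything (Opera)
    case "B": return !page.hasOnunload;     // store unless site uses onunload
    case "C": return !(page.hasOnunload ||  // store only for "inert" pages
                       page.hasOnload ||
                       page.hasInlineScripts);
    default:  throw new Error("unknown policy: " + policy);
  }
}
```

Policy (B) trades a little correctness risk for far broader cache coverage than (C), which is the trade-off comment 11 argues for.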
What I might like to see is an option to have Back/Forward behave essentially as though each page were in a separate tabbed window, because that is how I have been working around the current behavior: when I want to follow a link from which I expect I may want to go Back, I open the link in a new tab, and then use Shift+Ctrl+Tab to go Back and Ctrl+Tab to go Forward; when a page is out of date, I do Ctrl+R. When combined with using tabs for branches in the browsing path/sequence, this approach is not ideal, but for plain linear browsing I find it works great, so I think having the equivalent behavior using Back/Forward rather than tabs would be even better. I guess the key thing is that I can be the one who decides the reloading behavior, rather than the browser having to figure out what to do for me, when it may not have enough understanding of what is going on at the web-application level to make a good choice. I would still want an option to have the browser treat Back/Forward as movement along a path of pages where only one page can be visible/visited at a time, rather than the other model, in which any page that has been visited may still be visible in its previous state, regardless of whatever pages have been visited since, and it is up to the user to worry about treating the set of pages as a sequence.
What do you plan to do about plugins? Re comment 11: I agree that (B) is reasonable. However, that makes the assumption that onbeforeunload handlers are idempotent while onunload handlers are not. The way onbeforeunload works (prompt using a returned string) suggests that typical onbeforeunload handlers probably ought to be idempotent. I wonder if we could get away with making the same assumption about onunload. It doesn't seem entirely unreasonable. (How widely used are both of these?)
There's another tricky case: forms that use onclick/onsubmit. These scripts could be setting hidden values or making other modifications to form fields before they are submitted. OTOH, these pages are already in trouble, since form fields are restored when you go back to a page. But there are actually pages that break on this already (can't come up with an example off the top of my head). I think the best way to deal with those pages is some sort of 'onenter' or 'onreturn' event. Also, it would be useful to be able to tag a form field (or maybe an entire form) to be automatically reset on this event. This is useful for things like navigation selects or search fields (it's always confusing to return to a Google search page and see the value you searched for _leaving_ that page, rather than the value you searched for _getting_ that page). In fact, this stuff could be independent of this bug, though if we did store the DOM it would be useful if the event indicated whether the DOM was restored or just the form values. Something for the WHATWG; I'll see if I can write up a formal proposal...
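The reset-on-return idea above can be sketched independently of what the event ends up being called. A hypothetical helper in plain JavaScript, assuming fields are modeled as objects carrying a `resetOnReturn` tag (`valuesOnReturn` and the field shape are invented for illustration; the wiring to an 'onenter'-style event is shown only as a comment):

```javascript
// Compute the values form fields should show when the user returns to a
// page. wasPersisted is true when the DOM was restored from history
// rather than reloaded from the network.
function valuesOnReturn(fields, wasPersisted) {
  const out = {};
  for (const [name, f] of Object.entries(fields)) {
    // Fields tagged resetOnReturn (e.g. search boxes, navigation
    // selects) are cleared so the user sees the state they had
    // *getting* the page, not leaving it.
    out[name] = wasPersisted && f.resetOnReturn ? "" : f.value;
  }
  return out;
}

// Intended wiring in a browser (the event name is the open question):
// window.addEventListener("onenter", e => applyToForm(valuesOnReturn(...)));
```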
Actually, what happens if I leave a page, resize the window, then go back to the page? Is there an onload event? Is there an onresize event? If there is neither, how is the page to reposition things if it does the usual "position it all in JS" thing? As for doing prototype stuff like XUL, it's been thought of.... it might still happen. It wouldn't really solve this bug as filed, though, I suspect.
(In reply to comment #15) > Actually, what happens if I leave a page, resize the window, then go back > to the page? There's an onresize event.
Whiteboard: (o72555)
Target Milestone: mozilla1.8alpha6 → mozilla1.8beta2
Blocks: 203448
I'm not gonna have time for this till 1.9, but bryner has already started hacking on it, and we've been talking. He's the man for this bug, so reassigning. /be
Assignee: brendan → bryner
Status: ASSIGNED → NEW
Hixie, anyone: do any of Opera, Safari, or IE implement an onreturn or onenter event handler as sicking proposed? (Isn't onback the obvious handler name? Hmm, not if you go forward. onhistorymove?) It would be good to build on a de-facto standard, if one exists and doesn't suck. /be
Alias: blazinglyfastback
We were this close back when jband and I last messed around with this odd case. The idea is that you have o1, you want to resolve id in it, but o1[id] doesn't exist. o1's class has a new-resolve hook that defines id in some o2 not == o1 and not on o1's prototype chain. That should work, because all callers of js_LookupProperty* (direct or via OBJ_LOOKUP_PROPERTY) respect the out params. This patch fixes the bug that broke it. For this bug, consider window == o1, an inner window object that's N:1 against the window, where N is the length of session history as o2. bryner, find me on IRC if you run into further problems. They should all be fixable. Thanks, /be
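The outer/inner split described above can be mimicked at the JS language level with a Proxy, purely as an illustration -- the real mechanism is a JSAPI new-resolve hook in C, and `makeOuterWindow`/`swapInner` are invented names. One long-lived "outer" object (the window, o1) forwards property resolution to whichever "inner" scope (o2) is current, so session history can hold N inner scopes against one window:

```javascript
// Illustrative only: one outer object, N swappable inner scopes.
function makeOuterWindow() {
  let inner = {};  // current inner scope; one per document in history
  const outer = new Proxy({}, {
    get: (t, key) => inner[key],                       // resolve on inner
    set: (t, key, val) => { inner[key] = val; return true; },
    has: (t, key) => key in inner,
  });
  return {
    outer,
    // On navigation, swap in a fresh inner scope and hand back the old
    // one so it can be stashed in the session history entry.
    swapInner(newInner) { const old = inner; inner = newInner; return old; },
  };
}
```

Going "back" is then just swapping the stored inner scope in again; the outer identity (what other frames hold pointers to) never changes.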
Attachment #177491 - Flags: review?(shaver)
Comment on attachment 177491 [details] [diff] [review] fix so JSNewResolveOp can return an unrelated object In my feverish haze, that looks great! r=shaver
Attachment #177491 - Flags: review?(shaver) → review+
Comment on attachment 177491 [details] [diff] [review] fix so JSNewResolveOp can return an unrelated object Checked in, although this won't be needed for now, if ever, by this bug. /be
Depends on: 285404
bryner's got a patch, it's getting there. Please don't spam this bug with notes about other bugs. Yeah, once bryner's patch lands, and if you go back within the limit of in-memory-cached history items, we will comply with RFC 2616. Woo. /be
Blocks: 288462
*** Bug 288432 has been marked as a duplicate of this bug. ***
Attached patch work in progress (obsolete) — Splinter Review
This is where I'm at right now. There are a number of known issues here:
- random crashes, often when closing windows
- plugins need to be stopped and restarted
- timeouts need to be stopped and restarted
- need to fire a new event so that password autocomplete can run when going back
- random rendering problem with the reload button on Mac
- the cache size needs to be tweaked, and ideally based on available RAM
- need to handle window size changes between saving and restoring the presentation
- do something with marquee
One source of crashes is an iframe having focus when you close a window -- this has to do with the non-sticky case in DocumentViewerImpl::Hide. Working on that now.
bryner, what's the plan for loads that aren't done when we leave a document? Say an image is still loading? Or the document itself is not done loading?
(In reply to comment #27) > bryner, what's the plan for loads that aren't done when we leave a document? > Say an image is still loading? Or the document itself is not done loading? Brian and I spoke about this today. I think we agreed that partially loaded pages should not be put in the back cache since they might compete against the newly loading page for network and CPU bandwidth. Either that, or we should find some way to suspend the loading of an incomplete page... but that sounds very complicated.
> since they might compete against the newly loading page for network and CPU > bandwidth. I was thinking more in terms of sharing the same loadgroup, so affecting onload and the like... ;) I agree that just not putting incompletely loaded pages in the back cache is a simple solution for the time being.
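The rule being agreed on here -- don't cache the presentation if anything besides the incoming document's own request is still active in the loadgroup -- can be sketched as a tiny predicate. Plain-object model with hypothetical names, not the actual nsDocShell::CanSavePresentation signature:

```javascript
// activeRequests: requests still live in the docshell's loadgroup.
// newDocumentRequest: the request for the document we are navigating to,
// which shares that same loadgroup and so is expected to appear in it.
function canSavePresentation(activeRequests, newDocumentRequest) {
  // Any other live request (image still loading, XMLHttpRequest, etc.)
  // means the old page is incompletely loaded: skip caching rather than
  // try to suspend and restart those loads.
  return activeRequests.every(req => req === newDocumentRequest);
}
```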
Attached patch patch for review (obsolete) — Splinter Review
I think this still has a few problems. In fact, I'm pretty sure of it. But I think we should get this in, turned on, so that we can start getting crashes isolated with Talkback, etc. and we can have this stable by Firefox 1.1.
Attachment #179734 - Attachment is obsolete: true
Attachment #181003 - Flags: superreview?(jst)
Attachment #181003 - Flags: review?(roc)
This is targeted at 1.8b3, not b2, right?
I'll do what I can but I think bz-review would be invaluable also.
I'd certainly like to see it for b2 so that we can get testing exposure in the Firefox developer preview. Unless it turns out to be so bad that it masks all other problems with the developer preview, in which case we'd be best off getting it in as soon as possible after that release.
I've been using bf-cache builds for days and not had any problems that would warrant it being turned off. This should totally be checked in, without a pref.
> This should totally be checked in, without a pref. Pref'd on to be sure, but without a pref to disable just in case?!? A simple pref that lets me disable bf-cache to see if it is the culprit in some instance would be invaluable IMO.
Some non-substantive comments: Shouldn't a Sanitize() failure cause the DOM to automatically not be cached? It looks like we just ignore the return from Sanitize()... Also, what happens if content fires DOMPageRestore events? Wouldn't autofill trigger on those? Do we need some isTrusted checks here? I'd really rather we stopped putting our custom events in the DOM* namespace. How about Moz* event names instead?
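The isTrusted concern above can be sketched as a guard in the listener. A hypothetical handler in plain JavaScript; `makeRestoreHandler` and the event shape are illustrative, not the real autofill code:

```javascript
// Wrap the privileged action so it only runs for browser-generated
// events. Content script can dispatch a look-alike DOMPageRestore, but
// a content-synthesized event has isTrusted === false.
function makeRestoreHandler(runAutofill) {
  return function onPageRestore(event) {
    if (!event.isTrusted) return;   // ignore content-synthesized copies
    runAutofill(event.target);
  };
}
```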
mIsGoingAway is never set for a document with this patch, as far as I can see. That seems wrong. It'll lead to extra work in RemoveStyleSheet() when the document is finally destroyed. It may also be worth seeing when we currently enter GetScriptGlobalObject() while mIsGoingAway is true, since the behavior of those codepaths will change with this patch (in particular, if they actually need a script global they'll break). I'm assuming you did some basic testing and verified that the destructors for the documents you evict from the cache are in fact firing, right? Removing the code that tears down the DOM in SetScriptGlobalObject is a tad scary, but if we can do that without effectively leaking, that's great. Some subclasses override SetScriptGlobalObject and do nontrivial things in it. If you're setting it to null and then back to non-null on the same document, for example, this should cause nsImageDocument to create a new root and call SetRootContent on it, triggering some asserts. Worth checking on this and nsPluginDocument's behavior. If a document is loaded but there are still network requests in the loadgroup (background images, loads started _after_ the onload event, etc), we don't deal at all well, as far as I can tell -- either they end up in the loadgroup of the new document we're loading (which is the same exact loadgroup), or we kill them and probably don't restart them properly on restore. We need to figure out how to do this right (while recalling that at the moment reflow events, say, look like a foreground request in the loadgroup). More comments later....
I'd vote for no pref because we're getting drowned in prefs. If this is checked in _with_ a pref then I would say we absolutely must remove at least two other prefs first. (I'm talking hidden prefs here of course, not UI prefs.)
Blocks: 290394
I was using this patch for two days, and had to go back to the default build because of intermittent problems with keyboard focus. Keyboard navigation was sometimes totally broken, and while usually a click inside the window helps, that trick didn't work here. Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8b2) Gecko/20050413, GTK2/Xft
I've been using a build with this patch for 2 days without any problems. It works great. I built Linux builds with this, and a few folks used them - also without a problem. Today I'm going to build a Windows one, to get feedback from Win users.
Comment on attachment 181003 [details] [diff] [review] patch for review roc said it was fine to switch reviewer of record to bz, who has started already. roc's gonna have a look too. jst is on vacation still, so I'm going to sr. At this point, apart from overt bugs and design mistakes that will be hard to fix the longer we wait, I'd like to see review happen in the next couple of days, so we can get this patch in and get wider testing. /be
Attachment #181003 - Flags: superreview?(jst)
Attachment #181003 - Flags: superreview?(brendan)
Attachment #181003 - Flags: review?(roc)
Attachment #181003 - Flags: review?(bzbarsky)
I'm not going to be able to do a thorough review of this till early next week....
Attached patch new patch (obsolete) — Splinter Review
Issues fixed since the last patch:
- Added a null check in the accessibility code so that we don't crash if the document has no container. I don't actually think this is related to my changes, but I hit it while testing, and aaronl's recent checkin seems to be the culprit.
- Added additional checking in nsDocShell::CanSavePresentation to avoid saving the presentation when we're reloading the page.
- Added save and restore of the docshell tree (oops)
- Refactored the timeout suspend/resume code and made sure to suspend and resume for child windows as well.
Attachment #181003 - Attachment is obsolete: true
Attachment #181463 - Flags: superreview?(brendan)
Attachment #181463 - Flags: review?(bzbarsky)
Comment on attachment 181003 [details] [diff] [review] patch for review removing obsolete requests
Attachment #181003 - Flags: superreview?(brendan)
Attachment #181003 - Flags: review?(bzbarsky)
The patch changes the behavior on this simple testcase. Replace the CGI with one that does JPEG push or XMLHttpRequest activity or whatever to get an idea of where this could become a pretty major issue. The problem, of course, is that a page transition stops all network activity in a docshell's loadgroup. This is desirable, since the loadgroup belongs to the _docshell_. But then when we fast-back we don't restart the network requests we killed.
I'd like to see some documentation in nsDocShell.h. Specifically, I'd like to see documentation as to the lifetime and ownership model of mContentViewer, and documentation on what the three methods you're adding do, when it's OK to call them, what happens if aURI doesn't match the URI in aSHEntry in RestorePresentation(), what's an acceptable SHEntry to pass to RestorePresentation. I'm still sorting out through the subframe navigation handling, so if you actually wrote down somewhere how that works with this patch (given that cloning the session history entries doesn't clone this cached presentation stuff), let me know please... I suspect it works out in some cases because we only actually do a load when we have a different history entry, on the specific docshell the entry is different for, but doesn't this mean that going back to the result of a subframe navigation across a toplevel load will be slow?
More things to check on I thought of... 1) Is there an issue with the cached prescontexts holding a backpointer to the docshell (the "container")? It's an nsWeakPtr, so it's not that it can start pointing to dellocated memory or anything, but could something in the prescontext or some prescontext consumer make a call on the docshell when the docshell doesn't expect it? I seem to recall anchors getting the docshell for traversal via this pointer. What happens if someone fires a (trusted, so say from chrome) DOM click event on a pointer to a node in a document that we've moved away from? 2) Are there issues with the device context being shared by the cached viewmanagers and the current live one? I'm not sure what could arise here; roc might know this part better... So if we go back to a frameset from a non-frame page, do we create new docshells for the subframes? And if so, do they create new device contexts? Or is the device context held by the content viewer?
(In reply to comment #45) > a docshell's loadgroup. This is desirable, since the loadgroup belongs to the > _docshell_. But then when we fast-back we don't restart the network requests > we killed. I'm going to post a new patch in a minute that avoids this problem by not caching the presentation if there are any active requests in the load group (other than the request for the new document). That way we don't have to figure out how to restart something like an XMLHttpRequest. (In reply to comment #46) > them, what happens if aURI doesn't match the URI in aSHEntry in That would be bad and should not happen. Do you know of a case where it does? > I'm still sorting out through the subframe navigation handling, so if you > actually wrote down somewhere how that works with this patch (given that cloning > the session history entries doesn't clone this cached presentation stuff), let > me know please... I suspect it works out in some cases because we only actually > do a load when we have a different history entry, on the specific docshell the > entry is different for, but doesn't this mean that going back to the result of a > subframe navigation across a toplevel load will be slow? I'm not entirely clear on how it does or should work either, or I would. Do you think just copying the nsIContentViewer pointer to the cloned entry would fix this? It seems like if you do that, once you restore one of the history entries, you would need to nuke the saved presentation pointer from any other entries that are using it.
(In reply to comment #48) > traversal via this pointer. What happens if someone fires a (trusted, so say > from chrome) DOM click event on a pointer to a node in a document that we've > moved away from? > The intent is that the cached document is completely dead to events. You have to explicitly go to session history to get a pointer to one, so I don't think we need to plug this at every entry point. > 2) Are there issues with the device context being shared by the cached > viewmanagers and the current live one? I'm not sure what could arise here; roc > might know this part better... So if we go back to a frameset from a non-frame > page, do we create new docshells for the subframes? And if so, do they create > new device contexts? Or is the device context held by the content viewer? If we go back to a frameset, the subframe docshells are cached with the toplevel content viewer (because the subdocuments will have a reference to them). So they will retain their original device contexts.
> not caching the presentation if there are any active requests in the load group What if the active request is a pending reflow? Or a pending image error or load event? Same thing? As far as that goes, what _is_ the plan for pending PLEvents of various sorts when the presentation is cached? Do we want them firing on the cached presentation? > That would be bad and should not happen. Then why do you need both the URI and the SHEntry? If the URI is always the URI in the SHEntry, just pass the latter? > Do you think just copying the nsIContentViewer pointer to the cloned entry > would fix this? I'm not sure, for the same reasons you're not sure (ownership issues, Hide/Show ordering, etc). > You have to explicitly go to session history to get a pointer to one Why? What if you're holding a pointer to a document (say in a different frame on the same website) and then the document gets unloaded before you do something with it?
Attached patch new patch (obsolete) — Splinter Review
Changes since the last patch:
- Added a check for active requests in the docshell's load group; we skip caching the presentation if there are any.
- Got rid of nsDocument::mIsGoingAway in favor of checking for a null SGO.
- Fixed nsEventListenerManager::HasUnloadEvents to check for beforeunload as well.
- Made nsPluginDocument recreate its mStreamListener when restored from session history.
- Moved nsISecureBrowserUI ownership to the docshell, so that we can always get at this to cache it (it formerly lived on the XUL <browser> element, in a way that was inaccessible from C++). Added capture and restore of security state so that alerts fire as expected when going back/forward.
- Added suspend and resume of meta-refresh loads. To do this I ended up adding to nsITimer so that you can retrieve the nsITimerCallback object.
This doesn't address Boris's most recent comments yet, but I wanted to get this new code up.
Attachment #181463 - Attachment is obsolete: true
Attachment #181797 - Flags: superreview?(brendan)
Attachment #181797 - Flags: review?(bzbarsky)
(In reply to comment #51) > What if the active request is a pending reflow? Or a pending image error or > load event? Same thing? > > As far as that goes, what _is_ the plan for pending PLEvents of various sorts > when the presentation is cached? Do we want them firing on the cached presentation? > No, I don't really want anything to fire on the cached presentation. If it's not something that we can figure out how to suspend and resume, then I'd say it should be a no-cache condition. I'd rather start off overly-conservative and then gradually expand the types of pages that we can cache. > Then why do you need both the URI and the SHEntry? If the URI is always the URI > in the SHEntry, just pass the latter? > Could... aURI originates in LoadHistoryEntry if aSHEntry is non-null, so it looks like they definitely have to correspond. > > You have to explicitly go to session history to get a pointer to one > > Why? What if you're holding a pointer to a document (say in a different frame > on the same website) and then the document gets unloaded before you do something > with it? The document needs to act as if it has no presentation, in that case.
> - Got rid of nsDocument::mIsGoingAway in favor of checking for null SGO. Note that a document created via the DOMImplementation should have a null SGO, last I checked. Not sure about documents loaded via XMLHttpRequest and such, but that may be the case there too... I need to think about how embedding apps are affected by the secure browser UI change. In general, any time we have to use nsIDocShell in our chrome that means the functionality is not available to embeddors (since nsIDocShell is not frozen and never will be). > No, I don't really want anything to fire on the cached presentation. Then we probably need to audit all places where we create and post PLEvents and figure out what's supposed to happen for all of them.... Similar for various timers (the caret blink timer comes to mind). > The document needs to act as if it has no presentation, in that case. I'm not sure this patch accomplishes that. One other thing to watch out for -- computed style. Once someone gets a computed style object, we need to somehow invalidate it (or something?) when the document is cached.
(In reply to comment #54) > I need to think about how embedding apps are affected by the secure browser UI > change. In general, any time we have to use nsIDocShell in our chrome that > means the functionality is not available to embeddors (since nsIDocShell is not > frozen and never will be). > Darin and I went through this... the secure browser UI object is not accessible via any public interface on nsWebBrowser, either. So it should be no change. > > No, I don't really want anything to fire on the cached presentation. > > Then we probably need to audit all places where we create and post PLEvents and > figure out what's supposed to happen for all of them.... Similar for various > timers (the caret blink timer comes to mind). Right, we could also try harder to suspend things that might post such events. Even if we did that, it might not hurt to have a global "inactive" flag for a presentation that would kill anything like this. > > > The document needs to act as if it has no presentation, in that case. > > I'm not sure this patch accomplishes that. > No, I don't think it does. I'd say unless there are immediate known issues, that could be addressed later. > One other thing to watch out for -- computed style. Once someone gets a computed > style object, we need to somehow invalidate it (or something?) when the document > is cached. It might not hurt to see what Safari does in this case (and the above case, as well).
So just to be clear, I've somewhat accepted that we plan to check this in with known issues. I'd like to make sure those issues are not ones that affect the basic architecture of the design. Things like deciding on the ownership of things, a clear statement of our design invariants, and a plan for enforcing those invariants are, in my mind, prerequisites for sorting out what issues can be solved in the current design....
Attached patch new patchSplinter Review
Made sure to disable any active carets before putting a document into session history. Also added lots of comments in nsDocShell.h about how this all works.
Attachment #181797 - Attachment is obsolete: true
Attachment #181828 - Flags: superreview?(brendan)
Attachment #181828 - Flags: review?(bzbarsky)
Comment on attachment 181828 [details] [diff] [review] new patch bryner, great work. sr+a=me with the major issues below fixed before checkin; minor issues for your consideration only. My marks carry over if you attach a new patch fixing these issues shortly. With bz agreeing, I think you should check this in as soon as possible. Followup bugs should be filed and fixed as usual for any big patch, particularly on the XXX and XXXbryner issues, which should, if you follow Nat Friedman's wise advice, be FIXME: comments instead. (So do file followup bugs now). It would be good to have a followup bug about better global-LRU memory pressure instead of the at-most-five-by-default pref. I'll file it if you like; let me know. /be Major issues: CopyJSPropertyArray should use the JS_GetStringChars, JS_GetUCPropertyAttributes, JS_GetUCProperty, and JS_DefineUCProperty APIs to handle full Unicode. If any of those APIs fail, there was a JS error reported (OOM) or set as a pending exception (other than OOM), so you should propagate failure from CopyJSPropertyArray. It looks like user-defined window-level getters and setters (which will be enumerable) will be replaced by undefined values, which is bad. Instead of + jsval propvalue; + if (!::JS_GetProperty(cx, aSource, propname_str, &propvalue)) { + NS_WARNING("property was found, but get failed"); + continue; + } + + PRBool res = ::JS_DefineProperty(cx, aDest, propname_str, propvalue, + NULL, NULL, attrs); you want to declare JSPropertyOp getter, setter; call JS_GetUCPropertyAttrsGetterAndSetter (I just added this API) instead of JS_GetPropertyAttributes; and define the (Unicode-named) property with getter and setter passed instead of NULL and NULL. Why don't you have to save and restore mTimeoutInsertionPoint as well as mTimeouts? Minor issues: Nit: use two-space indenting in nsDocument::Sanitize(). 
Nit: localize input and form to their respective for loop body blocks in nsDocument::Sanitize(). Nit: SubDocEnumData *args = NS_STATIC_CAST(SubDocEnumData*, arg); in SubDocHashEnum, to avoid data->data (args->data, nice). In light of that: SubDocEnumArgs, not SubDocEnumData. DOMPageRestore is ok by me -- we should standardize this via the WHATWG or (hah! I'll believe it when I see it) the re-forming W3C DOM group. Uber-nit: in nsDocShell::CanSavePresentation, // difficult (i.e. XMLHttpRequest). s/i.e./e.g./ In nsDocShell::RestorePresentation, // Pulling the viewer out of the CachedPresentation will thaw() it. thaw refers to some old name for what method? open?
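The getter/setter issue flagged in the major comments has a direct JS-level analogue: copying window properties by value snapshots a getter into plain data, while copying the property descriptor preserves the accessor pair. A sketch under that analogy (the real fix is in the JSAPI C code, via JS_GetUCPropertyAttrsGetterAndSetter; `copyPropsNaive`/`copyPropsPreserving` are invented names):

```javascript
// The broken approach: get-then-define invokes any getter and stores
// its current return value as a plain data property, losing the accessor.
function copyPropsNaive(src, dst) {
  for (const key of Object.keys(src)) {
    dst[key] = src[key];
  }
}

// The fixed approach: transfer the full descriptor so user-defined
// getters and setters survive the copy.
function copyPropsPreserving(src, dst) {
  for (const key of Object.keys(src)) {
    Object.defineProperty(dst, key, Object.getOwnPropertyDescriptor(src, key));
  }
}
```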
Attachment #181828 - Flags: superreview?(brendan)
Attachment #181828 - Flags: superreview+
Attachment #181828 - Flags: approval1.8b2+
Status: NEW → ASSIGNED
Flags: blocking1.8b2+
Oh, and bryner found what I should have remembered to mention: intervals need mWhen updating on resume. And we agreed timeouts and intervals should resume with whatever time remains in the current timeout or interval period, to avoid skewing outstanding timeouts from any related intervals. /be
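The agreed resume semantics can be sketched as follows. This is a minimal stand-in, not the actual window timeout code: `mWhen`, `mRemaining`, `Suspend`, and `Resume` are illustrative names modeled on the comment above.

```cpp
#include <cassert>
#include <cstdint>

// Sketch: resume a timeout with whatever time remained in its current
// period, instead of firing at the (now stale) absolute deadline.
struct Timeout {
    int64_t mWhen;       // absolute fire time, in ms
    int64_t mRemaining;  // portion of the period left, set on suspend
};

void Suspend(Timeout& t, int64_t now) {
    // Capture how much of the current period is left, so related
    // timeouts and intervals keep their relative spacing.
    t.mRemaining = t.mWhen > now ? t.mWhen - now : 0;
}

void Resume(Timeout& t, int64_t now) {
    // Re-anchor the fire time: resume instant plus the remainder.
    t.mWhen = now + t.mRemaining;
}
```

An already-expired timeout (remainder zero) fires immediately on resume rather than being pushed out by a full period, which keeps it from skewing relative to any related intervals.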
Comment on attachment 181463 [details] [diff] [review] new patch patch is obsolete. Removing review-request.
Attachment #181463 - Flags: superreview?(brendan)
Attachment #181463 - Flags: review?(bzbarsky)
Comment on attachment 181797 [details] [diff] [review] new patch Patch is obsolete, removing review-request.
Attachment #181797 - Flags: superreview?(brendan)
Attachment #181797 - Flags: review?(bzbarsky)
Running with this, lots of windows, tabs, plugins, maps.google.com... crashed here:

#0  0x006837a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
#1  0x008ea624 in raise () from /lib/tls/libpthread.so.0
#2  0x080545fe in nsProfileLock::FatalSignalHandler ()
#3  <signal handler called>
#4  0xb6f00c9e in nsDocShell::CanSavePresentation () from /home/brendan/src/firefox-opt/mozilla/dist/bin/components/libdocshell.so
#5  0xb6f03458 in nsDocShell::CreateContentViewer () from /home/brendan/src/firefox-opt/mozilla/dist/bin/components/libdocshell.so
#6  0xb6f0c7ea in nsDSURIContentListener::DoContent () from /home/brendan/src/firefox-opt/mozilla/dist/bin/components/libdocshell.so

Looking at the disassembly before $eip in frame 4, I'm thinking

  nsCOMPtr<nsIDocument> doc = do_QueryInterface(pWin->GetExtantDocument());
  if (doc->HasRequests(aNewRequest))
    return PR_FALSE;

needs a null check. bryner?

/be
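The guarded shape of the suggested fix, as a minimal self-contained sketch. `Document` here is a hypothetical stand-in for the nsIDocument pointer; in the real code, `do_QueryInterface` can hand back null, which is what triggers the crash above.

```cpp
#include <cassert>

// Hypothetical stand-in for nsIDocument, used only to show the guard.
struct Document {
    bool hasRequests = false;
    bool HasRequests() const { return hasRequests; }
};

bool CanSavePresentation(const Document* doc) {
    // Null document: nothing to cache, so refuse rather than dereference.
    // Active requests: don't cache a presentation mid-load.
    if (!doc || doc->HasRequests())
        return false;
    return true;
}
```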
Nits: printf("Copied window property: %s\n", propname_str); won't work too well now that propname_str is jschar *. /be
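One way to keep such a debug printf working is to narrow the UTF-16 name before printing. This is a self-contained sketch using `char16_t` in place of jschar; a real fix in the tree would more likely go through NS_ConvertUTF16toUTF8. Non-ASCII units are replaced with '?' for simplicity.

```cpp
#include <cstdio>
#include <string>

// Narrow a UTF-16 string to a printf-friendly narrow string,
// replacing anything outside ASCII with '?'.
std::string NarrowForDebug(const char16_t* s) {
    std::string out;
    for (; *s; ++s)
        out += (*s < 0x80) ? static_cast<char>(*s) : '?';
    return out;
}

// Usage (propname_str assumed to be the jschar* from the patch):
//   printf("Copied window property: %s\n",
//          NarrowForDebug(propname_str).c_str());
```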
###!!! ASSERTION: Already have a root content! Clear out first!: '!mRootContent ', file c:/mozilla.org/trunk2/mozilla/content/base/src/nsDocument.cpp, line 1501 I'm guessing that's comment 37? things get real bad after that assert.
Perhaps nsGenericElement::ShouldFocus should default to false when there is no script global? That would make a lot more sense to me...
Comment on attachment 181828 [details] [diff] [review]
new patch

Drive-by review comments:

+class WindowStateHolder : public nsISupports
[...]
+  JSRuntime *mRuntime;

In Mozilla there's only ever one JSRuntime, and you can always get to that through the nsIJSRuntimeService, so no need to store that here IMO. Or is this potentially needed after the service is no longer reachable? If so, add comments about that.

Also nsDocument::Sanitize() should use 2-space indentation.
Attachment #181828 - Flags: superreview+ → superreview?(brendan)
Comment on attachment 181828 [details] [diff] [review]
new patch

I finally built a tree with this patch and did a run with XPCOM_MEM_LEAK_LOG set. Just start up and shut down. With this patch, that leaks 22 nsDocument objects (and all sorts of other stuff that they're holding references to, of course). That's not so good.

Then I tried using this patch. Loaded http://lxr.mozilla.org/seamonkey/source/layout/base/nsCSSFrameConstructor.cpp then clicked the "CVS Log" link. So far so good. Then clicked back (which was fast). Then clicked forward. Then clicked back and watched nsCSSFrameConstructor be reparsed. I guess the problem is that I'm using SeaMonkey, so the max_viewers pref ended up being 0. But if it's zero, why did we cache the viewer for nsCSSFrameConstructor the first time around? That should be fixed.

If I start the browser, open a new tab, load about:config in the new tab, filter for "svg", right-click on the svg.enabled pref, toggle it, and shut down the browser, I leak two more documents (24 instead of 22). That may just be because I loaded more XUL documents, of course.

I got some assertions that seem to be caused by this patch:

###!!! ASSERTION: Can't get interface requestor: 'ifreq', file /home/bzbarsky/mozilla/xlib/mozilla/extensions/typeaheadfind/src/nsTypeAheadFind.cpp, line 2096

###!!! ASSERTION: failed to get the nsIScriptGlobalObject. bug 95465?: 'boundGlobal', file /home/bzbarsky/mozilla/xlib/mozilla/content/xbl/src/nsXBLPrototypeHandler.cpp, line 423

The former I can reproduce at will. Start the browser, hit Ctrl-T to open a new tab, then focus the content area and hit Ctrl-W to close the tab. Gives that assert and some nice frame manager warnings. The latter assert, I'm not quite sure how I got... I managed to reproduce it once more while messing with about:config for the "svg" pref, though.

With this patch, I still fail the "resuming network activity" testcase, most of the time (more on this when I comment on the docshell changes).
When I _do_ pass it, going forward again I get:

###!!! ASSERTION: Request list is not empty.: 'mRequests.entryCount == 0', file /home/bzbarsky/mozilla/xlib/mozilla/netwerk/base/src/nsLoadGroup.cpp, line 410

###!!! ASSERTION: Foreground URLs are active.: 'mForegroundCount == 0', file /home/bzbarsky/mozilla/xlib/mozilla/netwerk/base/src/nsLoadGroup.cpp, line 411

On the "forms in cached document submit" testcase, if I just click the link in the subframe and then go back (without clicking the button), I get:

###!!! ASSERTION: Unexpected root view: 'rootView == origView->GetViewManager()->RootViewManager()->GetRootView()', file /home/bzbarsky/mozilla/xlib/mozilla/view/src/nsViewManager.cpp, line 4108

on every paint... Sounds like restoring viewmanager state for subdocuments doesn't quite work.

Moving on to the patch itself:

>Index: browser/app/profile/firefox.js

Please make a similar change to browser-prefs.js. Or maybe the C++ code should default to something nonzero here, so embeddors get this feature automatically (and those who don't want it have to disable it explicitly)?

>Index: content/base/public/nsIDocument.h
>@@ -52,6 +52,7 @@
>+#include "pldhash.h"

Why? I don't see you using it anywhere in this header...

>+  /**
>+   * Sanitize the document by resetting all input elements and forms that have
>+   * autocomplete=off to their default values.

Do we really want to be that specific? What effect, if any, should Sanitize have on XForms? Might want to file a followup bug on that. Certainly not an initial-landing blocker.

>+  typedef PRBool (*nsSubDocEnumFunc)(nsIDocument *aDocument, void *aData);

What does the return value mean?

>Index: content/base/src/nsDocument.cpp
>@@ -1616,7 +1618,12 @@ nsDocument::RemoveStyleSheet(nsIStyleShe
>+  // If there is no script global object, we assume that either:
>+  //   1) Document teardown is in progress
>+  //   2) The document is in session history not being shown; it should not be

That's not a good assumption for data documents, etc (though they may not ever get RemoveStyleSheet called on them; I'm not sure). Wouldn't it make sense to key off of mInDestructor here, if that's the only place where we remove all our content?

>@@ -1832,7 +1839,7 @@ nsDocument::GetScriptGlobalObject() cons
>+  if (!mScriptGlobalObject) {
>     nsCOMPtr<nsIInterfaceRequestor> requestor =
>       do_QueryReferent(mDocumentContainer);

I don't see anything in this patch that changes a document's container... So it looks to me like cached documents will point to their original docshell as the container, and will point to the window in that docshell as their script global. Is that really desirable? jst might be able to say for sure, but it seems wrong to me... Then again, it's no different from what we already had if someone held a pointer to a document past the point where the window loaded a new document...

>@@ -1847,37 +1854,6 @@ nsDocument::GetScriptGlobalObject() cons
> nsDocument::SetScriptGlobalObject(nsIScriptGlobalObject *aScriptGlobalObject)

I'll bet that the changes to this function are causing the leaks I observed, fwiw.

>+nsDocument::Sanitize()

Is there a reason we don't have to worry about autocomplete="off"