> It sounds like you're asking "If I can learn an address, have I defeated ASLR", and the answer is usually yes.

Really? Because leaking a heap address on Windows, OpenBSD, etc. doesn't yield a full collapse of the randomization of every loaded module, given the preconditions. I'm asking whether it's just my box exhibiting this behavior, and it's a long story why it can't just be mine.

> It depends on the circumstances of course, but leaking any address to an attacker would usually be considered a bug and renders ASLR essentially useless.

Well, you are somewhat missing the gravity here. If this is generally reproducible, you don't need the address to leak at all: you just need a series of arithmetic operations to land you at a fixed offset within the target module. No read-back is required. I wouldn't argue that finding the exact arguments for that arithmetic is common, only that an ASLR "metaprogramming primitive" potentially exists.

> For example, if you can find some JavaScript that tells you the address of an object on the heap or the base address of a module, that would be considered a security bug.

I'm fairly positive that no ASLR scheme is intended to entirely and totally collapse given a single address that you don't necessarily even need to know. Thus I find it hard to believe this is the case.

> You don't usually run untrusted Python, so Python's id() isn't a bug - but you do run untrusted JavaScript.

Really? Because your employer does exactly that. The bigger question was the behavior, not Python. It seems a practical extension of the Spy in the Sandbox work to grab enough of an address to leverage this from JavaScript, although code is not yet forthcoming there. However, given the cache-line and physical-to-virtual address mappings, this seems likely.
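To make the "single leak plus arithmetic" point concrete, here is a minimal sketch using CPython's `id()`, which (as a CPython implementation detail, not a language guarantee) returns an object's virtual address. The relative offset here is computed inside the script purely for illustration; in a real attack the attacker would be assumed to know the relative layout in advance, so one leaked address plus fixed-offset arithmetic recovers other addresses with no further read-back.

```python
class Node:
    pass

a = Node()
leaked = id(a)  # the "leak": one heap address (CPython-specific behavior)

b = Node()
# Assumed attacker knowledge: the relative layout between objects.
# Computed here only to make the sketch self-contained.
offset = id(b) - leaked

# From the single leaked address, pure arithmetic derives another
# address without reading anything else back from the target:
derived = leaked + offset
assert derived == id(b)
print(hex(leaked), hex(derived))
```

The same logic is what makes a leaked module base so damaging under ASLR: every symbol in that module sits at a fixed offset from the base, so one address plus subtraction and addition yields them all.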
Source: Gmail -> IFTTT -> Blogger