JS Library De-eval()-er

This page runs a library that uses eval and Function, but instruments them in order to figure out, ahead of time, what code the library actually needs to run.
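A minimal sketch of what such instrumentation might look like (the names here are hypothetical, not the tool's actual code):

```javascript
// Hypothetical sketch: wrap eval and Function so that every code string
// the library evaluates gets recorded, while still executing normally.
const recordedSnippets = new Set();

const originalEval = globalThis.eval;
globalThis.eval = function (code) {
  recordedSnippets.add(String(code));
  return originalEval(code); // indirect eval: runs in global scope
};

const OriginalFunction = globalThis.Function;
globalThis.Function = function (...args) {
  // The last argument is the body; any earlier ones are parameter names.
  recordedSnippets.add(args.join("\u0000"));
  return OriginalFunction(...args);
};
```

After exercising the library's features, recordedSnippets would contain every snippet that the generated replacement needs to support.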

In cases where the evaluated code generally stays the same from run to run, and there isn't a billion lines of it, this allows generating a library that does not require eval or Function, so you can do away with 'unsafe-eval' in the Content-Security-Policy.

This doesn't work for all libraries, but it works for the ones I use. Libraries that use eval and Function purely for performance reasons are more likely to work. Something that uses them for a JavaScript command prompt (like on-page dev tools) will not.

This tool doesn't detect whether the code evaluation is dynamic or not. It simply generates code to replace eval and Function with versions that only allow snippets of code already seen while running on this page.

The output is a monkey patch to be loaded before any code that uses eval and Function. The monkey patch can be included in the library itself, or in a separate file.
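The generated patch might be structured roughly like this (a sketch; the snippet tables here are invented, and the real output's format may differ):

```javascript
// Hypothetical sketch of a generated monkey patch: eval and Function
// are replaced with lookups into tables of pre-compiled functions,
// keyed by the exact code strings recorded during instrumentation.
const compiledEvalSnippets = {
  "2 + 3": () => 2 + 3,
};
const compiledFunctions = {
  "a,b\u0000return a + b": (a, b) => a + b,
};

globalThis.eval = function (code) {
  const compiled = compiledEvalSnippets[code];
  if (!compiled) throw new Error("De-eval()-er: unrecorded code: " + code);
  return compiled();
};

globalThis.Function = function (...args) {
  const body = args.pop();
  const key = args.join(",") + "\u0000" + body;
  const compiled = compiledFunctions[key];
  if (!compiled) throw new Error("De-eval()-er: unrecorded code: " + key);
  return compiled;
};
```

Any code string not seen during instrumentation throws instead of being evaluated, which is what makes it safe to drop 'unsafe-eval'.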

Won't this miss certain cases?

There are other ways of accessing eval and Function, such as (function(){}).constructor("alert('hey')")(); but the intent of this tool is not to catch every possible route to eval in order to directly prevent access to it, but rather to let you prevent access with Content-Security-Policy's script-src directive.

Won't this generate huge amounts of code?

For code that evaluates templated code as a form of metaprogramming, this may lead to a huge amount of generated output. However, it should compress well, as it is very repetitive.

Parsing performance may still be an issue.

It will be interesting to test this. I would expect it to work better if you include the monkey patch as a wrapper around the library, so that it can compress together with the library.

Can this work without first running the code using eval?

In some cases it would be possible to generate, combinatorially, all the code that might be produced, but in general this is undecidable.

It may be worthwhile to attempt, but in this project, it wasn't necessary to statically analyze the code.

Executing it with instrumentation was actually quite simple and effective.

How does this behave differently from native eval?

The generated functions do not run in the same scope as the original code, so this makes eval behave more like Function. You may get ReferenceErrors if the evaluated code accesses variables from the surrounding scope.
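A small demonstration of the difference: direct eval sees the caller's locals, while any indirect call (including a patched eval) does not.

```javascript
// Direct eval (calling the eval intrinsic by name) can read the
// caller's local variables; a call through a reference cannot.
function withDirectEval() {
  const secret = 42;
  return eval("secret"); // direct eval: resolves the local variable
}

function withIndirectEval() {
  const secret = 42; // not visible to the indirect call below
  const indirectEval = eval; // any call through a reference is indirect
  try {
    return indirectEval("secret"); // runs in global scope
  } catch (e) {
    return e instanceof ReferenceError ? "ReferenceError" : e;
  }
}
```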

This could be fixed by passing a function to get/set variables from the surrounding code into each eval call site that needs it. That would require some static analysis to determine which variables are accessed. Alternatively, to do it lazily, every valid JS identifier within the eval'd code could be assumed to possibly reference a variable outside, with getters and setters generated for each. The functions generated for recorded eval calls could then be wrapped in with (contextGettersAndSetters) {}, with contextGettersAndSetters passed in at each eval call site, so that the inner code does not need to be rewritten into function calls.
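The lazy approach could be sketched like this (hypothetical: makeContext and the Proxy-based getters/setters are my illustration, not part of the tool):

```javascript
// Hypothetical sketch of the lazy context-passing idea: a Proxy stands
// in for per-variable getters/setters, and the generated function wraps
// its body in `with (context) { ... }` so identifiers resolve through it.
function makeContext(scope) {
  return new Proxy({}, {
    has: (target, name) => name in scope, // `with` binding check
    get: (target, name) => scope[name],   // variable read
    set: (target, name, value) => {       // variable write
      scope[name] = value;
      return true;
    },
  });
}

// What a generated replacement for eval("x + y") might look like.
// (Built with Function here only so this sketch parses under strict
// mode; the real generated patch would contain the function literally,
// since `with` is disallowed in strict code.)
const compiled = new Function("context", "with (context) { return x + y; }");

const scope = { x: 2, y: 40 };
const result = compiled(makeContext(scope)); // reads x and y from scope
```

Assignments inside the evaluated code would flow back out through the Proxy's set trap, so the surrounding code sees the changes.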

Will you make this into a reusable tool?

I'd like to, yes. I think it would be very valuable for tightening security in various projects.

For now, this is part of Tracky Mouse. MIT-licensed.

That said, if you need this, you can copy this HTML file and change the code it loads. It should be pretty easy to use already.