There are languages designed with Capabilities in mind. Like, whatever starts the program gets to decide what functionality is exposed to the running program. It’s great for situations where you might run untrusted code and want to, for example, disallow network access or filesystem access.
More generally, there are also sandboxing techniques that runtimes provide. WebAssembly, for instance, is designed for programs to run in their own memory space with a restricted set of functions and, again, Capabilities. This might be nice if you ever work on a cloud application that allows users to upload their own programs and you want to impose limits on those programs. Think AWS Lambda, except the programs running wouldn’t necessarily even have access to the filesystem or be able to make web requests unless the user configures that.
It might be a good design space for even more esoteric areas, like device drivers. Like, why worry if your GPU drivers are also collecting telemetry on your computer if you can just turn off that capability?
There are older applications of sandboxing that are a bit further from what you’re asking as well, like iframes on a webpage: allowing code served from different servers you don’t necessarily control to run without needing to worry about it reading access tokens from local storage.
Or even BSD Jails and chroot.
Good question 💖
I guess a more modern example you might run into is something like Rust’s no_std environment, which strips out the standard library because it doesn’t work on every device the language is designed to target (namely microcontrollers that don’t even have an operating system on them). Or like, maybe you’re writing your own operating system.
Another example that comes to mind is General Magic, a company that designed a programming language with a similar Capabilities system meant to restrict access to functions and code on their devices, with copyright enforcement in mind as a primary use case. There’s a documentary about the device if you’re interested: https://www.generalmagicthemovie.com
Deno is an example of a language runtime (based on JavaScript + TypeScript) that’s been built with capabilities in mind. By default, programs aren’t allowed to touch the filesystem or network (except to allow static imports to run; fallible dynamic import calls that could be used to probe the filesystem or network are restricted like other I/O). Programs can start up worker threads that have even further permission restrictions than the main program.
How does that answer the question? Sandboxing has nothing to do with import operators in languages.
I think Roc has some ideas like this.
On a sandboxing level, I suppose we’d be talking about unikernels (which seem cool, but the tooling didn’t look simple enough for me to experiment with them).
Not a language per se, but subsets of languages used for fantasy consoles usually do not implement import functionality. TIC-80, PICO-8, etc. I wouldn’t call that a feature, but it drives you to write less code, and more space-optimized code.
Now that I think about it, source code size could be a feature in itself. Look at code-golf-oriented esolangs:
- Pyth
- CJam
- GolfScript
- Microscript/Microscript II
- Seriously
- Rotor
- Minkolang
- Gaia
- Jelly
- 05AB1E
- japt
- and more…
The unnamed language that is compiled by `cc`.
To elaborate… C[++] is really two different languages, with mostly distinct feature sets, handled in most cases by different compilers, interpreters, parsers, etc. The unnamed language with keywords like `#if` and `#define`, which produces text output, is a templating system that is functionally independent of the unnamed language with keywords like `for` and `unsigned`, which actually compiles to a binary. You can use `cpp` to run all the logic and conditionals in that first language to produce output, even if you replace the second language with something else like Python or assembly. You can use `cc` to compile that second language from source to binary, without support from the preprocessor. That second language, the one that `cc` understands and compiles, does not have the ability to import functions or values or whatever from other files.
I’d argue that’s not true. That’s what the `extern` keyword is for. If you do `#include <stdio.h>`, you don’t get the actual `printf` function defined by the preprocessor. You just get an extern declaration (though `extern` is optional for function signatures). The preprocessed source code that is fed to `cc` is still not complete, and cannot be used until it is linked to an object file that defines `printf`. So really, the unnamed “C preprocessor output language” can access functions or values from elsewhere.
No, it can’t. The compiler can’t do anything with content from any file not explicitly passed to it. You’re mixing up the compiler and the linker (and the linker has nothing to do with either language; it can link binaries compiled from any language).
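To make the compiler-versus-linker distinction being argued here concrete, here’s a minimal two-file sketch (the file names `add.c` and `main.c` are just illustrative, not from the thread): the compiler is satisfied by declarations alone, and the actual definitions only have to exist once the linker runs.

```c
/* add.c: compiled on its own with `cc -c add.c` */
int add(int a, int b) {
    return a + b;
}
```

```c
/* main.c: compiled on its own with `cc -c main.c`.
 * The compiler never sees add.c or the body of printf; those definitions
 * are only supplied at link time: `cc main.o add.o -o demo` */
#include <stdio.h>              /* pastes in a declaration of printf, not its code */

extern int add(int a, int b);   /* a promise that some other object file defines add
                                   (the extern is optional for function declarations) */

int main(void) {
    printf("%d\n", add(2, 3));  /* both calls are resolved by the linker, against add.o and libc */
    return 0;
}
```

Whether you call that “importing” or “just separate compilation plus linking” is exactly the distinction the two posts above are arguing over.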
Holy shit this is fascinating!
I’m not smart enough to know, but what kind of use case(s) do you imagine for this?
I know encapsulation is desirable in part because of security. I figured something similar could be achieved by removing the ability to import anything from another program. However, I struggled to think of other situations in which having no imports would be desirable, and so I wondered…
There are certainly situations where it would be valuable to be able to place limits on what can be imported, but I can’t imagine trying to work with a language that was completely devoid of imports, because that would mean 100% of your source would have to be in a single file, which sounds absolutely awful for anything but the most trivial applications.
Knowing programmers, I think a major unintended side effect of such a paradigm would be HUGE monolithic source files.
Nah, you’d just get a preprocessor like C/C++ to do #include for you prior to compiling.
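As a rough sketch of what that looks like in practice (the file names `helpers.inc` and `main.c` are hypothetical), the preprocessor just pastes the included file’s text into the translation unit before the compiler proper ever runs; `cc -E main.c` shows the pasted result.

```c
/* helpers.inc: just a chunk of C source meant to be pasted in, not a module */
static int triple(int x) {
    return 3 * x;
}
```

```c
/* main.c: build with `cc main.c -o demo`; inspect the pasted text with `cc -E main.c` */
#include <stdio.h>
#include "helpers.inc"          /* textual inclusion, not a language-level import */

int main(void) {
    printf("%d\n", triple(14)); /* prints 42 */
    return 0;
}
```

The compiler still sees one big monolithic translation unit; splitting the source across files is purely a textual convenience handled before compilation.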