|Paradigm||Multi-paradigm: multiple dispatch (core), procedural, functional, meta, multistaged|
|Designed by||Jeff Bezanson, Alan Edelman, Stefan Karpinski, Viral B. Shah|
|Developer||Jeff Bezanson, Stefan Karpinski, Viral B. Shah, and other contributors|
|Stable release||1.2.0 / 20 August 2019|
|Preview release||1.3.0-rc2 / 12 September 2019; 1.4.0-DEV with daily updates|
|Typing discipline||Dynamic, nominative, parametric, optional|
|Implementation language||Julia, C, C++, Scheme, LLVM|
|Platform||Tier 1: x86-64, IA-32, CUDA; Tier 2: ARM (both 32- and 64-bit); Tier 3: PowerPC|
|OS||Linux, macOS, Windows and FreeBSD|
|License||MIT (core), GPL v2; a makefile option omits GPL libraries|
Julia is a high-level programming language designed for high-performance numerical analysis and computational science. Distinctive aspects of Julia's design include a type system with parametric polymorphism, fully dynamic typing, and multiple dispatch as its core programming paradigm. It allows concurrent, parallel and distributed computing, and direct calling of C and Fortran libraries without glue code. Julia uses a just-in-time (JIT) compiler, which the Julia community refers to as "just-ahead-of-time" compilation because code is compiled to native machine code before it is first run.
Julia is garbage-collected, uses eager evaluation, and includes efficient libraries for floating-point calculations, linear algebra, random number generation, and regular expression matching. Many libraries are available, including some (e.g., for fast Fourier transforms) that were previously bundled with Julia and are now separate.
Work on Julia was started in 2009, by Jeff Bezanson, Stefan Karpinski, Viral B. Shah, and Alan Edelman, who set out to create a free language that was both high-level and fast. On 14 February 2012 the team launched a website with a blog post explaining the language's mission. In an interview with InfoWorld in April 2012, Karpinski said of the name "Julia": "There's no good reason, really. It just seemed like a pretty name." Bezanson said he chose the name on the recommendation of a friend.
Since the 2012 launch, the Julia community has grown, with over 10,000,000 downloads as of September 2019, and the language is used at more than 1,500 universities. The official Julia Docker images on Docker Hub have seen over 4,000,000 downloads as of January 2019. JuliaCon, an academic conference for Julia users and developers, has been held annually since 2014.
Version 0.3 was released in August 2014, version 0.4 in October 2015, and version 0.5 in October 2016. Julia 0.6, released in June 2017, remained the stable release until 8 August 2018, when both Julia 0.7 (a useful release for testing packages and learning how to upgrade them for 1.0) and version 1.0 were released together. Work on Julia 0.7 was a "huge undertaking" (e.g., because of an "entirely new optimizer"), and some changes were made to the syntax (which is now stable and identical for 0.7 and 1.x) and to the semantics; the iteration interface was simplified.
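The simplified iteration interface mentioned above replaced the old three-function protocol with a single `iterate` function. A minimal sketch, using a hypothetical `Countdown` type, might look like this:

```julia
# A user-defined iterable type (hypothetical example).
struct Countdown
    from::Int
end

# One `iterate` method replaces the old start/next/done triple:
# return (element, next_state), or `nothing` when exhausted.
Base.iterate(c::Countdown, state = c.from) =
    state < 1 ? nothing : (state, state - 1)

# `collect` presizes its result using `length` by default.
Base.length(c::Countdown) = c.from

collect(Countdown(3))  # [3, 2, 1]
```

Because the protocol is a plain generic function, any code written against `iterate` (for loops, `sum`, `collect`, etc.) works with the new type automatically.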
Most packages that work on Julia 1.0.x also work on 1.1.x or newer, thanks to the forward-compatible syntax guarantee. The major exception, among packages for interacting with non-Julia code, is the JavaCall.jl package, used to call Java, Scala, etc. (calling most other languages works; e.g., the package for the R language has been fixed). Users who need those languages with Julia, e.g., through JDBC.jl or Apache Spark (via Spark.jl), can stay with the LTS version of Julia for now: a fix is milestoned for Julia 1.4, which has a due date of 12 December 2019, and a workaround already exists for Julia 1.3.0-rc1. Julia 1.3 and 2.0 (and later) currently have no set due dates.
Julia has attracted some high-profile users, from investment manager BlackRock, which uses it for time-series analytics, to the British insurer Aviva, which uses it for risk calculations. In 2015, the Federal Reserve Bank of New York used Julia to make models of the US economy, noting that the language made model estimation "about 10 times faster" than its previous MATLAB implementation. Julia's co-founders established Julia Computing in 2015 to provide paid support, training, and consulting services to clients, though Julia itself remains free to use. At the 2017 JuliaCon conference, Jeffrey Regier, Keno Fischer and others announced that the Celeste project used Julia to achieve "peak performance of 1.54 petaFLOPS using 1.3 million threads" on 9300 Knights Landing (KNL) nodes of the Cori II (Cray XC40) supercomputer (then 6th fastest computer in the world; at its peak the supercomputer was 5th fastest, and while still on the TOP500 list, it's no longer one of the top 10). Julia thus joins C, C++, and Fortran as high-level languages in which petaFLOPS computations have been achieved.
Three of the Julia co-creators are the recipients of the 2019 James H. Wilkinson Prize for Numerical Software (awarded every four years) "for the creation of Julia, an innovative environment for the creation of high-performance tools that enable the analysis and solution of computational science problems."
Julia has received contributions from 800 developers worldwide. Dr. Jeremy Kepner at MIT Lincoln Laboratory was the founding sponsor of the Julia project in its early days. In addition, funds from the Gordon and Betty Moore Foundation, the Alfred P. Sloan Foundation, Intel, and agencies such as NSF, DARPA, NIH, NASA, and FAA have been essential to the development of Julia.
Though designed for numerical computing, Julia is a general-purpose programming language. It is also useful for low-level systems programming, as a specification language, and for web programming: both for server web use and for web client programming.
According to the official website, the main features of the language include multiple dispatch, a dynamic type system, performance approaching that of statically-typed languages, a built-in package manager, and Lisp-like macros.
Multiple dispatch (also termed multimethods in Lisp) is a generalization of single dispatch, the polymorphic mechanism used in common object-oriented programming (OOP) languages that relies on inheritance. In Julia, all concrete types are subtypes of abstract types, and directly or indirectly subtypes of the Any type, which is the top of the type hierarchy. Concrete types cannot themselves be subtyped the way they can in other languages; composition is used instead (see also inheritance versus subtyping).
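A minimal sketch of multiple dispatch (the type names here are illustrative, not from any library): the method that runs is chosen from the runtime types of all arguments, not just the first.

```julia
abstract type Pet end
struct Dog <: Pet; name::String; end
struct Cat <: Pet; name::String; end

# Four methods of one generic function, selected on both argument types.
meets(a::Dog, b::Dog) = "sniffs"
meets(a::Dog, b::Cat) = "chases"
meets(a::Cat, b::Dog) = "hisses"
meets(a::Cat, b::Cat) = "slinks"

encounter(a::Pet, b::Pet) = "$(a.name) meets $(b.name) and $(meets(a, b))"

println(encounter(Dog("Rex"), Cat("Whiskers")))  # Rex meets Whiskers and chases
```

In a single-dispatch OOP language only the receiver's type would be consulted, so the Dog/Cat combinations would need manual type checks.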
Julia draws significant inspiration from various dialects of Lisp, including Scheme and Common Lisp, and it shares many features with Dylan, also a multiple-dispatch-oriented dynamic language (which features an ALGOL-like free-form infix syntax rather than a Lisp-like prefix syntax, while in Julia "everything" is an expression), and with Fortress, another numerical programming language (which features multiple dispatch and a sophisticated parametric type system). While Common Lisp Object System (CLOS) adds multiple dispatch to Common Lisp, not all functions are generic functions.
In Julia, Dylan, and Fortress, extensibility is the default, and the system's built-in functions are all generic and extensible. In Dylan, multiple dispatch is as fundamental as it is in Julia: all user-defined functions and even basic built-in operations like + are generic. Dylan's type system, however, does not fully support parametric types, which are more typical of the ML lineage of languages. By default, CLOS does not allow for dispatch on Common Lisp's parametric types; such extended dispatch semantics can only be added as an extension through the CLOS Metaobject Protocol. By convergent design, Fortress also features multiple dispatch on parametric types; unlike Julia, however, Fortress is statically rather than dynamically typed, with separate compilation and execution phases. The language features are summarized in the following table:
|Language||Type system||Generic functions||Parametric types|
|Julia||Dynamic||Default||Yes (full dispatch)|
|Common Lisp||Dynamic||Opt-in||Yes (but no dispatch)|
|Dylan||Dynamic||Default||Partial (no dispatch)|
|Fortress||Static||Default||Yes (full dispatch)|
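Dispatch on parametric types, which the comparison above highlights as a Julia strength, can be sketched as follows (the `describe` function is a made-up example):

```julia
# Methods selected by the element-type parameter of the container.
describe(v::Vector{<:Integer}) = "a vector of integers"
describe(v::Vector{<:AbstractString}) = "a vector of strings"
describe(v::Vector) = "some other vector"

describe([1, 2, 3])    # "a vector of integers"
describe(["a", "b"])   # "a vector of strings"
describe([1.0, 2.0])   # "some other vector"
```

The most specific applicable method wins, so `Vector{Int}` matches the `Vector{<:Integer}` method rather than the generic `Vector` fallback.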
By default, the Julia runtime must be pre-installed in order to run user-provided source code. Alternatively, a standalone executable that needs no Julia source code can be built with ApplicationBuilder.jl and PackageCompiler.jl.
Julia's syntactic macros (used for metaprogramming), like Lisp macros, are more powerful than, and different from, the text-substitution macros used in the preprocessor of some other languages such as C, because they work at the level of abstract syntax trees (ASTs). Julia's macro system is hygienic, but also supports deliberate capture when desired (as for anaphoric macros) using the esc construct.
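As a sketch, the hypothetical macro below splices its argument expression into the generated AST twice; `esc` marks the expression as belonging to the caller's scope, the deliberate-capture escape hatch in Julia's otherwise hygienic macro system:

```julia
# @twice evaluates its argument expression two times at the call site.
macro twice(ex)
    quote
        $(esc(ex))   # esc: evaluate in the caller's scope, not the macro's
        $(esc(ex))
    end
end

counter = 0
@twice counter += 1
counter  # 2
```

Because macros receive unevaluated expressions rather than values, `@twice` manipulates the code `counter += 1` itself, which a text-substitution preprocessor could only approximate.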
The official Julia distribution includes an interactive session shell, Julia's read-eval-print loop (REPL), which can be used to experiment and test code quickly. The following fragment shows a sample session in which strings are concatenated automatically by println:
julia> p(x) = 2x^2 + 1; f(x, y) = 1 + 2p(x)y

julia> println("Hello world!", " I'm on cloud ", f(0, 4), " as Julia supports recognizable syntax!")
Hello world! I'm on cloud 9 as Julia supports recognizable syntax!
The REPL gives the user access to the system shell and to a help mode, by pressing ; or ? after the prompt (preceding each command), respectively. It also keeps the history of commands, including between sessions. Code can be tested inside Julia's interactive session or saved into a file with a
.jl extension and run from the command line by typing:
$ julia <filename>
Julia is in practice interoperable with many languages. Julia's
ccall keyword is used to call C-exported or Fortran shared library functions individually.
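A minimal ccall sketch, assuming a POSIX system where these C library symbols are available (symbol and library names vary by platform):

```julia
# ccall takes the symbol (optionally with a library name), the return
# type, a tuple of argument types, and the argument values.

# getpid from libc: no arguments, returns the process id as a Cint.
pid = ccall(:getpid, Cint, ())

# floor from libm: takes and returns a C double.
y = ccall((:floor, "libm"), Cdouble, (Cdouble,), 2.7)  # 2.0
```

No wrapper or binding-generation step is needed; the call is compiled down to a direct native call into the shared library.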
Julia has Unicode 12.0 support (and Julia 1.3 adds support for the latest 12.1.0, which adds only one letter), with UTF-8 used for strings (by default) and for Julia source code, which also optionally allows common math symbols for many operators, such as ∈ for the in operator.
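A few illustrative uses of Unicode in Julia source (in the REPL such symbols are entered via LaTeX-like tab completion, e.g. \in then Tab for ∈):

```julia
α = 0.5              # identifiers may be Unicode
2 ∈ [1, 2, 3]        # true; ∈ is the `in` operator
√16                  # 4.0; √ is the sqrt operator
α ≤ 1.0              # true; ≤ is <=
```

The ASCII spellings (`in`, `sqrt`, `<=`) remain available; the Unicode forms are optional aliases.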
Julia's core is implemented in Julia and C, together with C++ for the LLVM dependency. The parsing and code-lowering are implemented in FemtoLisp, a Scheme dialect. The LLVM compiler infrastructure project is used as the back end for generation of 64-bit or 32-bit optimized machine code depending on the platform Julia runs on. With some exceptions (e.g., PCRE), the standard library is implemented in Julia itself. The most notable aspect of Julia's implementation is its speed, which is often within a factor of two relative to fully optimized C code (and thus often an order of magnitude faster than Python or R). Development of Julia began in 2009 and an open-source version was publicized in February 2012.
Julia uses a JIT compiler (MCJIT from LLVM): it generates native machine code directly, before a function is first run, rather than bytecode that is run on a virtual machine (VM) or translated as it runs, as with, e.g., Java on the JVM or on Dalvik in Android.
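This compilation model can be observed directly from the REPL: Julia compiles each method to native code for the concrete argument types, and `@code_native` displays the result (the assembly output is platform-dependent, so none is reproduced here):

```julia
add(a, b) = a + b

add(1, 2)              # first call triggers compilation for (Int, Int)

@code_native add(1, 2) # prints the generated native assembly
```

Calling `add` with different argument types (e.g., floats) would trigger compilation of a separate specialized native method.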
Julia has four support tiers. It currently supports all 64-bit x86-64 processors (and is more optimized for the latest generations) and, in 32-bit mode ("i686"), all IA-32 ("x86") processors except decades-old pre-Pentium 4-era CPUs. More platforms are supported in lower tiers; e.g., tier 2 "fully supports ARMv8 (AArch64) processors, and supports ARMv7 and ARMv6 (AArch32) with some caveats." CUDA (i.e., "Nvidia PTX") has tier 1 support, with the help of an external package.
Julia's generated functions are closely related to the multistaged programming (MSP) paradigm popularized by Taha and Sheard, which generalizes the compile time/run time stages of program execution by allowing for multiple stages of delayed code execution.
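A sketch of a generated function (the `double` function is a made-up example): the body runs at compile time, once per concrete argument type, and returns an expression that is then compiled, realizing a staged computation.

```julia
@generated function double(x)
    # Here `x` is bound to the argument's *type*, not its value.
    if x <: Number
        return :(2x)            # numeric types: arithmetic doubling
    else
        return :(repeat(x, 2))  # e.g. strings: concatenate with itself
    end
end

double(3)     # 6
double("ab")  # "abab"
```

The branch between the two implementations is resolved entirely at compile time, so each specialization carries no runtime type-checking overhead.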
Julia's Base library, largely written in Julia itself, also integrates mature, best-of-breed open source C and Fortran libraries for ...
Note that this commit does not remove GPL utilities such as git and busybox that are included in the Julia binary installers on Mac and Windows. It allows building from source with no GPL library dependencies.
He has co-designed the programming language Scheme, which has greatly influenced the design of Julia.
using FFTW in current versions; that dependency is one of many moved out of the standard library into packages because it is GPL licensed, and thus it is not included in Julia 1.0 by default. ("Remove the FFTW bindings from Base by ararslan · Pull Request #21956 · JuliaLang/julia". GitHub. Retrieved 2018.)
Celeste is written entirely in Julia, and the Celeste team loaded an aggregate of 178 terabytes of image data to produce the most accurate catalog of 188 million astronomical objects in just 14.6 minutes [..] a performance improvement of 1,000x in single-threaded execution.
General Purpose [..] Julia lets you write UIs, statically compile your code, or even deploy it on a webserver.
In summary, even though Julia lacks a multi-threaded server solution currently out of box, we can easily take advantage of its process distribution features and a highly popular load balancing tech to get full CPU utilization for HTTP handling.
to import modules (e.g., python3-numpy)
string(greet, ", ", whom, ".\n") is an example of a preferred way to concatenate strings. Julia has the println and print functions, but also a @printf macro (i.e., not in function form) to eliminate the run-time overhead of formatting (unlike the same function in C).
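The fragment above can be expanded into a runnable sketch (assuming Julia 1.0+, where @printf lives in the Printf standard library):

```julia
using Printf  # @printf moved out of Base into the Printf stdlib

greet, whom = "Hello", "world"

s = string(greet, ", ", whom, ".\n")  # "Hello, world.\n"
t = greet * ", " * whom * ".\n"       # * also concatenates strings

# @printf parses its format string at macro-expansion time,
# avoiding run-time format parsing:
@printf("%s, %s!\n", greet, whom)
```

Because the format string is processed when the macro expands, malformed formats are rejected at compile time rather than at run time.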
The older implementation (llvm::JIT) is a sort of ad hoc implementation that brings together various pieces of the LLVM code generation and adds its own glue to get dynamically generated code into memory one function at a time. The newer implementation (llvm::MCJIT) is heavily based on the core MC library and emits complete object files into memory then prepares them for execution.
A list of known issues for ARM is available.
Julia works on all the Pi variants; we recommend using the Pi 3.