New source code license

Personal note: I’m not a native English speaker, so this post might contain mistakes, typos, etc. I’ve written it as quickly as possible.

Starting with Aseprite v1.1.8, the program’s source code license has changed from the GPLv2 to a new EULA that still gives you the possibility to compile and modify the program for your own purposes, but no longer allows you to redistribute Aseprite.

This doesn’t affect most users and customers at all, but for several other users (mainly people using Aseprite from Linux distributions) this is a big red flag. In Linux terminology: Aseprite is non-free software from now on. What does this mean? Linux distributions can no longer package and distribute Aseprite freely.

First we have to ask: why didn’t the license change before?

This post is separated into several sections. It is a journey through my mind during these years developing Aseprite. So here we go.

Selling free software

From the very beginning (1998~2001), I started developing Aseprite for free in my spare time as an innocent activity. I don’t know exactly when I decided to create “a product,” but maybe it was after Aseprite v0.9.6-beta1, or when I turned 30 years old, or after reading Xah Lee’s ideas about Emacs. Then Aseprite v1.0.0 came out in 2014 and it was the first commercial version. Selling GPL software was a reality. (Note: I remember that a lot of people hated this decision.)

Selling free software sounds great: You can live doing what you like (programming), give support to users (fixing bugs, new features, emails, etc.), people can contribute patches, and the program is still “free.”

But the reality is quite different: When you decide to start living from your own software (in my case, from March 2015), and your software is not a service, there is a feeling inside your head that makes you tremble: all your code is out there available for free.

My decision back then was (and I still think the same): The code is not as important as the design, the decisions and choices I’ve made, and the vision of what product I want to make. Modifying code doesn’t matter as much as how well you handle complexity and transform that complexity into a product, with a reduced number of bugs and a proper user experience (UX).

A lot of open source projects have a complete lack of vision, “good taste” (whatever that means), or even minimal thinking about the UX. If you want to help an open source project, you can: 1) learn how to handle complexity, 2) listen to user complaints, and 3) try to improve the UX from there.

Contributions and patches

It sounds nice that people can contribute patches, but the reality is that it requires a lot of time. You need to stop what you were working on, change the priority of things in the roadmap (because the patch might be something that you were expecting to implement in the distant future), review the code, test on all platforms, think about a nice UX, maybe propose more changes to the pull request, repeat the process, and merge the change.

Despite all this, I’ve always thought that keeping the source code of Aseprite available was a good idea. (And I still think the same, so Aseprite source code is still available for contributions.)

Open Source vs. Piracy

If someone doesn’t want to (or just cannot) pay for your work, they are not a customer. In that case they will try to find other ways to get the program. You just have to give up. Personally, I think that sharing source code is like declaring that explicitly: “If you cannot afford it, you might try to compile the program and use it anyway, until you can pay for it or feel that it’s worth it.”

I recommend this reading: “Piracy is a Thing - Give Up”, and “I Don’t Sell Games; I Sell Self-Satisfaction”.

Linux Distributions vs. Developers

For a long time now, Linux distributions (I think Debian was the first one) have packaged and distributed Aseprite. I welcomed this, even after I started selling Aseprite. (I also have to thank Tobias Hansen for this, a Debian package maintainer who also contributed some patches.)

But in recent months I started seeing a recurrent pattern: some people think that Aseprite is available on Ubuntu or other distributions because it is “just there” in the software center. It looks like Aseprite is free just because the distribution made it free (and not because of the developer’s decision; the developer is basically nonexistent).

And that was the trigger for several issues that I have had with Linux distributions for a long time:

  1. Almost no Linux distribution asserts that it is “not associated in any way with the developer of the application.”
  2. They do not clearly show who the real developer of the application is. Sometimes they mislead or confuse people into thinking that the developer (who fixes bugs and adds new features) is the (package) “maintainer.” Sometimes they include a link to the program website, but there is no “official support contact email.”
  3. They do not show buttons/links/any simple way to contribute to the real developer of the program. In my specific case, there is a way to buy the official version or donate in case you want to contribute to the development.
  4. In an alternative world, open source programs that are being sold could have a charge, with one percentage going to the distribution, another to the package maintainer, and another to the developer.
  5. I’m not sure how the review system for applications works on Ubuntu or other Linux distributions, but I think there is no way to contact the developer before the user writes a review; the review is based only on what was packaged.

I could also talk about the packaging mess that every distribution has made for application development, or the lack of interfaces to interact with the desktop. But those are other Linux Distributions vs. Developers rants.

The worst Linux issue is crash reports. I think that Linux distributions should have an automatic mechanism to save/show crash reports (à la OS X), or to share those crash reports automatically with the application developer. (In this case I think Ubuntu Apport and whoopsie are making the best effort.)

From my point of view, the complete inverse process is happening: Windows memory dumps and OS X crash reports are useful to detect bugs that then benefit Linux users.

This is one example of an OS X crash report. A pretty nice example of a bug detected on OS X benefiting all platforms. So the question here is: why should I distribute software on Linux if the benefit to other platforms is almost null?

Advice for Linux distributions

  1. Show clear information on your Linux distribution’s website and in its software center about: 1) who the developer is, 2) the fact that you are not associated with the developer, 3) an official support email address; and 4) be friendly with the developer’s distribution mechanism (e.g. donation link, charging for the software, etc.).
  2. Create an automatic mechanism à la OS X to show/send/copy-and-paste/print-screen/etc. crash reports.

From my point of view, it looks like Linux distributions just want to distribute software, but they don’t work on ways to help developers integrate their software with the distribution, e.g. better ways to find bugs in that software, better ways to interact with the desktop, etc.

So why not GPL?

I no longer think that distributing software is a good thing just because it can be distributed freely at no cost. (Even from an ecological point of view.) So these comments don’t apply only to the GPL.

Software is a tool, but it is also about making the tool and improving it. Each user should help in some way with the development of that tool (e.g. paying for it, compiling it, reporting bugs, etc.). If one user gets a crash, we should have an opportunity to fix the bug behind that crash and benefit all other users (on all platforms).

The GPL is about “giving everything to the user” and asking for nothing. That kind of mentality might contribute to the destruction of the software (or of the people behind it). Users have a responsibility: they should be encouraged to contribute as much information as possible to fix bugs, and we have the responsibility to facilitate this process as much as possible.

I remember an extreme case of this kind of mentality: the Heartbleed Bug (and OpenSSL is released under an Apache-like license).

Future contributions to the open source community

Several parts of Aseprite are released under the MIT license, and we will continue releasing more code under this permissive license for the benefit of other projects. We have two long-term objectives:

  1. Create an MIT-licensed library to develop desktop applications. This should include a way to handle crash reports on each platform.
  2. Create an MIT-licensed Aseprite-CLI that can be integrated easily into all kinds of asset pipelines and third-party tools.

Comments

Please, be respectful in the comments section. I’ve already received several insults that just show the worst side of the “free software community,” and I know that there are a lot of good people out there in this community. Words of encouragement are also welcome.

Forward Compatibility

The next Aseprite version (v1.2-beta1) will contain a new feature to create layer groups, which aren’t supported in v1.1. My plan is to release Aseprite v1.1.6 with some forward compatibility: a way to load the new .aseprite format with layer groups and convert them into something readable/usable in the old v1.1 version (showing a huge warning about it).

My first approach is that if we create something like this in v1.2:

With Groups

Loading the .aseprite file into v1.1 will show you something like this:

Flat

So basically groups are removed and all layers are moved to the same level.
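The flattening step can be sketched as a recursive walk over the layer tree. This is only an illustration with a hypothetical Layer struct; the real .aseprite loader works on its own chunk structures:

```cpp
#include <string>
#include <vector>

// Hypothetical layer node: a node with children is a group,
// a node without children is a plain layer.
struct Layer {
  std::string name;
  std::vector<Layer> children;   // non-empty => this node is a group
};

// Collect all leaf layers in order, dropping the group structure.
void flatten(const Layer& node, std::vector<std::string>& out) {
  if (node.children.empty()) {
    out.push_back(node.name);
  }
  else {
    for (const auto& child : node.children)
      flatten(child, out);
  }
}
```

The depth-first order keeps the visual stacking of layers intact even though the group nesting is lost.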

I’m not sure if v1.2 will include other features that require “forward compatibility” considerations, but in that case, a new v1.1 release should be made available to users.

What do you think about this approach?

Color with alpha

Hi everyone! This week I wasn’t able to work on Aseprite as much as I’d have wished (as a result of some personal issues). Anyway, as there is a big change coming in the next release, I think some feedback from you will be a great help. (Leave your comments below.)

The next version will contain better handling of the Alpha component (issue 286). It means that palette entries, the foreground color, and the background color will have an Alpha component:

Color with alpha

The idea is that you can replace the RGBA values of each pixel in an easy way. You pick a pixel (RGBA values) and then you replace pixels (RGBA values). This is friendlier for pixel art, but requires some important user interface changes.

Default Ink

In current Aseprite versions, the “Default Ink” composites the paint color with the layer color depending on the “Opacity” level. Now we would prefer a “Replace Pixel” ink by default. This way, when you paint, all four components (RGBA) are replaced:

Replace pixel ink

The new default ink (“Replace Pixel”) doesn’t use “Opacity,” so the Context Bar will not need the “Opacity” slider by default (it’s replaced by the Alpha component of the current color). This makes for a cleaner initial interface.
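As a rough model of the difference between the two inks (a simplified sketch, not the actual code from ink_processing.h):

```cpp
#include <cstdint>

struct Rgba { uint8_t r, g, b, a; };

// "Replace Pixel" ink: all four components are copied verbatim,
// ignoring any opacity setting (the destination pixel is discarded).
Rgba replacePixelInk(Rgba /*dst*/, Rgba src) {
  return src;
}

// Classic compositing ink (simplified): linearly blend src over dst
// using the tool's opacity level in [0, 255].
Rgba compositeInk(Rgba dst, Rgba src, int opacity) {
  auto mix = [&](int d, int s) -> uint8_t {
    return uint8_t(d + (s - d) * opacity / 255);
  };
  return { mix(dst.r, src.r), mix(dst.g, src.g),
           mix(dst.b, src.b), mix(dst.a, src.a) };
}
```

With “Replace Pixel” the opacity parameter simply disappears, which is why the slider can be removed from the default Context Bar.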

The “Opacity” slider will be used to control the intensity of the tool change, and will be available only for:

  • “Alpha Compositing” ink (merge paint RGBA + layer RGBA), or
  • “Lock Alpha” ink (merge RGB + layer RGB, doesn’t modify Alpha), or
  • Effect tools (like Blur or Jumble).

Eyedropper

Eyedropper will contain more options to grab different components:

Eyedropper

The default “Color+Alpha” option will pick RGB+Alpha, Gray+Alpha or just the palette index depending on the sprite color mode (RGB, Grayscale, or Indexed). But now you will be able to choose other options. E.g. pick RGB values without modifying the current Alpha, or pick just the Alpha component so RGB values stay the same, etc.

Generating RGBA palettes

We will be able to generate color palettes with Alpha components:

Generating palette

These kinds of palettes can be saved in 8-bit indexed PNG images, which support an alpha component in palette entries.

June Progress

Here are some details about the main changes made in recent weeks.

Layer blend modes

Layer blend modes are implemented for the next version:

I was looking for a library to do this for me (pixman, oiio, etc.), but almost all the libraries I found use premultiplied alpha. I’ll write about this in a future post; for now you only need to know that we use straight colors (non-premultiplied alpha) for alpha compositing.
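For reference, the standard straight-alpha “over” operator looks roughly like this (a sketch using normalized float channels for clarity; the real code works on 8-bit values):

```cpp
struct Color { float r, g, b, a; };  // straight (non-premultiplied), in [0, 1]

// Porter-Duff "over" with straight alpha: blend src on top of dst.
Color over(Color src, Color dst) {
  float outA = src.a + dst.a * (1.0f - src.a);
  if (outA == 0.0f)
    return {0, 0, 0, 0};
  auto c = [&](float s, float d) {
    // With straight colors each channel must be weighted by its alpha
    // here; premultiplied pipelines skip this weighting and division.
    return (s * src.a + d * dst.a * (1.0f - src.a)) / outA;
  };
  return { c(src.r, dst.r), c(src.g, dst.g), c(src.b, dst.b), outA };
}
```

The extra division per pixel is the price of straight alpha; the benefit is that stored RGB values keep their original meaning, which matters for pixel-art palettes.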

The implementation of the blend modes came from the PDF specification (section 11.3.5), with fixes for the Color Dodge and Color Burn modes.
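The two modes mentioned above can be written as the PDF spec defines them, with values normalized to [0, 1]. The edge cases (source 1 for dodge, source 0 for burn) are exactly where naive formulas would divide by zero:

```cpp
#include <algorithm>

// Separable blend functions per the PDF spec (section 11.3.5).
// b = backdrop channel, s = source channel, both in [0, 1].
double colorDodge(double b, double s) {
  if (s >= 1.0) return 1.0;                      // avoid division by zero
  return std::min(1.0, b / (1.0 - s));
}

double colorBurn(double b, double s) {
  if (s <= 0.0) return 0.0;                      // avoid division by zero
  return 1.0 - std::min(1.0, (1.0 - b) / s);
}
```

These are applied per channel, then composited with the layer opacity as usual.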

More (and less) than 256 colors

Right now color palettes have 256 colors. Palettes with fewer than 256 colors show black spots in the empty spaces. In the next version we’ll have better control over the number of colors in the palette:

Indexed images will continue using 8 bits (256 colors), but palettes might contain more than 256 colors. Anyway, this change brings new problems and situations that are being tested, e.g. palettes with fewer than 256 colors but images with indexes referring to a color out of range.

Special color modes

Currently we have two main color modes:

  • RGBA images: each pixel is RGBA and we see the result as RGBA (full-color).
  • Indexed images: each pixel has an index which refers to a palette color; the result is another indexed image, i.e., the output contains only colors from the palette.

But I’m experimenting with some special color modes:

  • Indexed images with RGBA output: layers are Indexed, but the whole composition outputs an RGBA image. This is possible because the result can be affected by blend modes, different alpha levels for each palette entry, layer and cel opacity, etc.
  • RGBA images with indexed output: layers are RGBA, but the output contains only palette colors. It means that all the compositing is done in RGBA, but the final render is dithered automatically in real time.

The following is a demonstration of this RGBA -> Indexed color mode using a real-time dithering technique (the output is adjusted automatically depending on the number of available colors):
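As a companion sketch, here is a generic ordered-dithering quantizer using a 4x4 Bayer matrix. This is a classic technique chosen for illustration; it is not necessarily the exact algorithm Aseprite uses:

```cpp
#include <cstdint>

// Classic 4x4 Bayer threshold matrix (values 0..15).
static const int kBayer4[4][4] = {
  {  0,  8,  2, 10 },
  { 12,  4, 14,  6 },
  {  3, 11,  1,  9 },
  { 15,  7, 13,  5 },
};

// Quantize one 8-bit channel down to `levels` values (levels >= 2),
// nudged by a position-dependent threshold so that flat areas turn
// into patterns instead of hard bands.
uint8_t ditherChannel(uint8_t v, int x, int y, int levels) {
  double threshold = (kBayer4[y & 3][x & 3] + 0.5) / 16.0;  // in (0, 1)
  double step = 255.0 / (levels - 1);
  int index = int(v / step + threshold);
  if (index > levels - 1)
    index = levels - 1;
  return uint8_t(index * step);
}
```

A mid-gray value quantized to 2 levels alternates between 0 and 255 depending on the pixel position, which is exactly the checkerboard effect you see in ordered dithering.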

At this moment I’m thinking about how to present all these advanced options to the user to avoid confusion. Moreover, we must not forget that RGBA vs. Indexed modes are already a problem for some users. (Some people find the RGBA color mode more intuitive/expected than the indexed one.)

Linux packages

I’ve dedicated a week to Linux distributions: fixing compilation problems, preparing virtual machines, and creating scripts that start a vagrant box, compile, and package automatically.

I already have a working Aseprite package for Ubuntu 32-bit that I’d like to make run on 64-bit too (the same .deb file for both architectures). But it still needs some adjustments. I also hope to have a working .rpm for Fedora in the coming days.

See you next week!

Data Recovery

From v1.1-beta3, Aseprite includes a data recovery feature. It was originally planned for v1.3, but as several new internal refactors were made for v1.1 (increasing the probability of crashes), it was rescheduled for the v1.1 milestone.

When the program crashes and you execute it again, it tries to restore the sprites that were being edited before the crash. Then it shows you this huge, horrible button at the top to start the recovery process:

Recover Lost Sprites button

But how exactly does this work?

All objects in the document layer have a specific ID and a version property. Every single time we modify the sprite, this version is incremented. (E.g. when we draw something, move a layer, change a layer name, etc.)

So when the program starts, it launches a background thread that saves your modifications every 2 minutes. This period can be configured in Edit > Preferences:

To save the modifications, this thread locks the sprite with a DocumentReader lock, compares all the document’s internal object versions (layers, cels, images, etc.), and if something has changed, that specific object is re-saved to disk. Here is a demonstration of one sprite and its session folder:

Session example

These little files contain the internal state of each object. When a document is closed, the folder is deleted completely, but if the program crashes, it remains, and the next instance of Aseprite can restore the sprite from the folder.
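The version-comparison idea can be sketched in a few lines. Everything here is hypothetical naming, just to show the mechanism: the backup thread remembers the last version it wrote for each object ID, and only re-saves objects whose version changed since then:

```cpp
#include <map>

using ObjectId = int;
using Version  = int;

// Sketch of the incremental-backup bookkeeping (not Aseprite's real API).
struct BackupSession {
  std::map<ObjectId, Version> saved;  // versions already written to disk

  // True if this object must be (re-)written to the session folder.
  bool needsSave(ObjectId id, Version current) const {
    auto it = saved.find(id);
    return it == saved.end() || it->second != current;
  }

  // Record that the object's current version is now on disk.
  void markSaved(ObjectId id, Version current) {
    saved[id] = current;
  }
};
```

Because only changed objects are rewritten, a 2-minute backup pass over a large sprite usually touches just a handful of small files.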

Tip: The only way to do something like this is to access your business layer with read and write locks each time you want to see or modify your document structure. In Aseprite, documents have read and write lock counters: multiple read locks are allowed, but a write lock can only be obtained if there is no other reader (or if we want to convert a reader lock into a writer lock).
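The lock-counter rules described above can be sketched like this. This is a single-threaded toy to show the invariants only; real code needs a mutex and condition variable, and the names are hypothetical:

```cpp
// Toy reader/writer lock counters (invariants only, no real blocking).
struct DocumentLock {
  int readers = 0;
  bool writer = false;

  bool tryReadLock() {
    if (writer) return false;        // a writer excludes all readers
    ++readers;
    return true;
  }

  bool tryWriteLock() {
    if (writer || readers > 0)       // any reader blocks the writer
      return false;
    writer = true;
    return true;
  }

  // Upgrade: the single remaining reader becomes the writer.
  bool tryUpgradeToWrite() {
    if (writer || readers != 1) return false;
    readers = 0;
    writer = true;
    return true;
  }

  void readUnlock()  { --readers; }
  void writeUnlock() { writer = false; }
};
```

The upgrade path is the interesting part: the backup thread reads with a DocumentReader-style lock, while the UI thread may upgrade to a writer when the user actually edits.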

The Tool Loop

Probably the craziest part of the source code inside Aseprite is the “tool loop.” And as it is the craziest, it is the one I’m most proud of (even though the code is ugly as hell). So, what is it?

Here is a “tool loop” in action:

It’s the whole process that begins when a mouse button is pressed, continues when the mouse is moved, and ends when the same mouse button is released.

Aseprite is distributed with a little gui.xml file. In this file there is a definition for each tool in the <tool> section. For example, the pencil tool is defined as:

<tool id="pencil"
      text="Pencil Tool"
      ink="paint"
      controller="freehand"
      pointshape="brush"
      intertwine="as_lines"
      tracepolicy="accumulate"
      />

What do those ink/controller/pointshape/intertwine/tracepolicy attributes mean?

Almost all tools are controlled by the ToolLoopManager class. This class receives UI events (mouse press/release, key down/up) from the sprite Editor (from DrawingState, really). (E.g. when the user presses a mouse button, ToolLoopManager::pressButton is called.) This means that the ToolLoopManager could be used in isolation (e.g. by unit tests) to draw on the sprite, simulating mouse buttons and keys.

Anyway, the ToolLoopManager receives a delegate: a ToolLoop implementation. This class has several sub-delegates like the ink/controller/pointshape/intertwine/tracepolicy (these come from the gui.xml configuration).

Each of these sub-delegates is used to control a specific part of the drawing process:

  1. The Controller creates the list of points to be connected/intertwined. For example, the FreehandController adds one point for each mouse movement, while the TwoPointsController adds just two points, the first one and the last one. (Used in line-like tools, e.g. line, rectangle, ellipse, etc.)
  2. The Intertwine joins the points generated by the Controller. The most common one is IntertwineAsLines, which joins each sequential pair of points with Bresenham lines. It’s used in the freehand tools and the line tool. Another interesting intertwiner is IntertwineAsPixelPerfect, dedicated to the pixel-perfect algorithm.
  3. For each pixel/point generated by the intertwiner, a PointShape is used. Its goal is to convert a pixel into scanlines for the current brush shape (or other shapes). There are three main point shapes.
  4. Each of the PointShape’s scanlines is drawn using the Ink, specifically the Ink::inkHline member function. Generally, ink implementations call a function defined in ink_processing.h. But those are a lot of details we’d prefer not to get into.

The general idea here is that each tool is controlled by the union of these “mini” delegates: points from the mouse are pre-processed by the Controller, joined by the Intertwiner, converted to scanlines by the PointShape, and finally drawn by the Ink. And those elements can be combined in different ways. Does this mean that we can create our own tools in gui.xml? Yes, we could:
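To make the Intertwine step concrete, here is a toy version of an IntertwineAsLines-style joiner using the classic Bresenham algorithm (the real implementation lives in Aseprite’s tools code and differs in detail):

```cpp
#include <cstdlib>
#include <vector>

struct Point { int x, y; };

// Join two points with a Bresenham line, returning every pixel on it
// (both endpoints included), as an intertwiner would feed the PointShape.
std::vector<Point> intertwineAsLine(Point a, Point b) {
  std::vector<Point> out;
  int dx = std::abs(b.x - a.x), sx = a.x < b.x ? 1 : -1;
  int dy = -std::abs(b.y - a.y), sy = a.y < b.y ? 1 : -1;
  int err = dx + dy;
  for (Point p = a;;) {
    out.push_back(p);
    if (p.x == b.x && p.y == b.y)
      break;
    int e2 = 2 * err;
    if (e2 >= dy) { err += dy; p.x += sx; }
    if (e2 <= dx) { err += dx; p.y += sy; }
  }
  return out;
}
```

In the real pipeline each returned point would then go through the PointShape (brush expansion into scanlines) and finally the Ink (per-scanline pixel writing).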

There are other (nasty) details, like the trace policy: tools like the freehand accumulate pixels, while tools like the line use only the last line. Or the scroll tool, which is handled by a specific Editor state, or the eyedropper tool, which bypasses the whole process using a dummy ink.

Tips: Sometimes you can split a huge task into smaller classes (or functions), each dedicated to one step or specific goal of the whole algorithm. We are used to separating the UI from the business logic, but remember that you can also separate the UI (e.g. the sprite Editor) from the UI logic itself (ToolLoopManager). It’s a way to create UI logic without widgets.

Weekly Devblog Post

Let’s resuscitate this devblog.

Every Saturday I will try to dedicate a special blog post to the progress or development of Aseprite. These posts will be a mix of specific topics I want to talk about and weekly reports on what has been done during those days. So let’s start.

From v1.1-beta3 some internal structures were changed. Here is a summary of these changes (below I explain what each item means):

Skia port

Aseprite was initially implemented using the Allegro 4 library. At a certain point I forked and customized it to add some features.

There are several problems with Allegro 4: it’s not event-based, and it uses deprecated technologies like DirectDraw and QuickDraw. So there is a real need to remove it from Aseprite.

Between branch 1.0 and master, I wrapped the Allegro 4 library inside the she layer. Then the program was ported to this she layer on a daily basis with several minor changes. Nowadays all app-specific code uses she, and this layer is the only one with direct access to Allegro 4.

Here is a diagram with the main Aseprite layers/modules/libraries (main, app, she, cfg, allegro, gfx, base):

Layers

There is an open issue to port she to another backend: Skia. This is a 2D graphics library developed by Google and currently used in Chrome. Anyway, Skia gives us only the 2D graphics part; we have to create the native windows ourselves. (Which, to a certain extent, is a good thing, as we can have total control over the window behavior and platform-specific details.)

Tip: Every time you use a library, here’s my advice: wrap it. Wrap everything you use. Create your own layer, use your own API, and implement that layer using the library. Sometimes you cannot wrap the library; if that happens to you, it’s because: 1) the library is huge, or 2) you are in the presence of a framework. And as frameworks wrap your program, you cannot escape from them. So prefer libraries over frameworks, and wrap them all.
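The “wrap it” advice boils down to making the application talk only to an interface you own, with one implementation file per backend. The names below are illustrative, not Aseprite’s real she API:

```cpp
#include <vector>

// The app sees only this interface; a backend (Allegro, Skia, ...)
// implements it in one isolated translation unit.
class Surface {
public:
  virtual ~Surface() {}
  virtual int width() const = 0;
  virtual int height() const = 0;
  virtual void putPixel(int x, int y, unsigned rgba) = 0;
};

// Swapping backends means writing a new implementation,
// not touching application code. Here, a pure in-memory one:
class MemorySurface : public Surface {
public:
  MemorySurface(int w, int h) : m_w(w), m_h(h), m_pixels(w * h, 0) {}
  int width() const override { return m_w; }
  int height() const override { return m_h; }
  void putPixel(int x, int y, unsigned rgba) override {
    m_pixels[y * m_w + x] = rgba;
  }
  unsigned pixel(int x, int y) const { return m_pixels[y * m_w + x]; }
private:
  int m_w, m_h;
  std::vector<unsigned> m_pixels;
};
```

An in-memory backend like this is also handy for unit tests: you can exercise drawing code with no windowing system at all.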

Native stuff

Along with the Skia port, I want to bring some specific native features depending on the platform. For example:

  • On Windows you can drag-and-drop files from Windows Explorer to Aseprite.
  • On OS X and Windows you can use native mouse cursor (Edit > Preferences > Experimental > Use native mouse cursor).

Another feature I’d like is to use native/classic open/save file dialogs. They’re already available in v1.1-beta3 for Windows (Edit > Preferences > Experimental > Use native file dialog), but now they will be available on OS X as well.

Options/settings/preferences

There are 3 different internal ways to access user options:

During the compilation process, the app::Preferences base class app::gen::GlobalPref is generated from the pref.xml file using the gen utility.

Tip: Always try to automatically generate code that you don’t want to repeat. User preferences generally include a setter, a getter, and observable signals for each option. Just avoid writing that code over and over again.
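What the generated code looks like per option can be sketched with one small template (hypothetical, not the real output of the gen utility): each option gets a getter, a change-detecting setter, and an observable signal in one place.

```cpp
#include <functional>
#include <vector>

// One preference option = value + getter + setter + change signal.
template<typename T>
class Option {
public:
  explicit Option(const T& defaultValue) : m_value(defaultValue) {}

  const T& operator()() const { return m_value; }      // getter

  void operator()(const T& newValue) {                 // setter
    if (m_value == newValue)
      return;                                          // no change, no signal
    m_value = newValue;
    for (auto& slot : m_observers)                     // notify observers
      slot(m_value);
  }

  void connect(std::function<void(const T&)> slot) {
    m_observers.push_back(std::move(slot));
  }

private:
  T m_value;
  std::vector<std::function<void(const T&)>> m_observers;
};
```

A generator then only has to emit one `Option<T> name{default};` member per entry in pref.xml instead of three hand-written functions each.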

Memory leaks

From time to time I compile the program with the ENABLE_MEMLEAK option enabled. This flag overloads the new, delete, base_malloc, and base_free functions to keep track of the stack trace of each allocation.

When the program finishes, a .txt file is generated with the list of stack traces from allocations that weren’t freed. It works perfectly with MSVC: you can know exactly which line is allocating memory that isn’t being freed.
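The core of the idea fits in a few lines. This sketch only counts live allocations (the real ENABLE_MEMLEAK code hooks new/delete and records a stack trace per allocation, which is omitted here):

```cpp
#include <cstdlib>

// Count live allocations; whatever remains at exit is a leak.
static int g_liveAllocations = 0;

void* trackedMalloc(std::size_t size) {
  ++g_liveAllocations;               // the real code also captures a
  return std::malloc(size);          // stack trace keyed by the pointer
}

void trackedFree(void* ptr) {
  if (ptr) {
    --g_liveAllocations;
    std::free(ptr);
  }
}

// At program exit: non-zero means some allocation was never freed.
int leakedCount() { return g_liveAllocations; }
```

Dumping the recorded stack traces for each still-live pointer at exit is what turns this counter into the per-line leak report described above.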

tl;dr: a lot of internal stuff

Migration problems from Google Code to GitHub

Today we can say goodbye to Google Code. Finally.

We’ve migrated all the issues to GitHub. The first attempt at migration was a complete disaster. Sure, we tested the migration with some dummy accounts/projects, but the real problems appear when you run your migration script against your actual project repository.

As a general rule, if you are going to move a Google Code project to GitHub, take care of these things:

  1. Undelete all deleted issues in your Google Code repository. Deleted issues are not migrated, and the numbers after them will be wrong on GitHub.
  2. If you have pull requests in your repository, you cannot do the migration. You cannot delete those pull requests, and they are counted as issues on GitHub, so your Google Code issue numbers will never match the GitHub issue numbers.

Indeed, the decision is quite simple (and hard to take): create a new repository, push all your branches and tags into it, migrate the issues, and then delete (or rename) the old repository and rename the new one to the original name. For Aseprite, the old repository is now called aseprite-broken, and the new one is aseprite.

The good side: we have all the GitHub features for issues (markdown formatting, links between commits, pull requests, users, autocomplete, etc.).

The bad side: we lost all those juicy stars, watchers, and forks.

The ugly side: you cannot upload binary files to issues (only images). We will talk about this in a future post.

Big changes without short-term benefits

Last year we did a lot of refactoring tasks. As you know, improving code doesn’t affect users in a direct way (and that sucks), but these changes are good for our future (or should be).

There are changes like these, where old C structs (jrect and jregion) are replaced with classes (gfx::Rect and gfx::Region). The main issue here was that we were using the jrect_new()/jrect_free() and jregion_new()/jregion_free() functions to allocate these structures everywhere. They come from C code, and it is a pretty common C pattern: you allocate and free everything manually. Now we use gfx::Rect/Region with value semantics, and all those free() calls are gone. Recommendation: always prefer value semantics to automatically avoid memory leaks. If your struct is huge, use the RAII idiom.
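The before/after in a nutshell (the jrect calls in the comment are shown as pseudo-C from memory; the Rect below is a simplified stand-in for gfx::Rect):

```cpp
// Old C pattern:
//   jrect* r = jrect_new(0, 0, 32, 32);
//   ...                                  // every code path must remember...
//   jrect_free(r);                       // ...to call this, or it leaks
//
// With value semantics the lifetime is automatic:
struct Rect {
  int x = 0, y = 0, w = 0, h = 0;
  bool contains(int px, int py) const {
    return px >= x && py >= y && px < x + w && py < y + h;
  }
};

Rect makeCanvasRect() {
  Rect r{0, 0, 32, 32};   // lives on the stack or inside containers
  return r;               // copied (or moved) out; nothing to free
}
```

No destructor call to forget, no ownership question to answer: the compiler handles both, which is exactly why those free() calls could disappear.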

Another big change. This commit is a shame: it contains several changes in one patch; never do that (and sometimes we don’t follow our own rules). The main change here is the addition of lock/unlock semantics for raster::Image access. You should never access image data directly, i.e. through a char* pointer. At the very least, you should not be able to access that char* before you lock the image. The right way to access image data is:

  1. lock the region you want to read/write (the lock operation should specify whether you are going to write or it’s read-only),
  2. get your pretty pointer to pixels (or an iterator in our case),
  3. read/modify the data,
  4. unlock the data.

These steps are necessary if you ever want to change the underlying representation of the image pixels (e.g. splitting the image into several tiles). This way, the lock operation can convert your fancy pixel representation into something simpler to iterate, and then the unlock operation can copy those pixels back into the complex representation (only if the lock was a write operation).

Recommendation: always separate the internal representation of an image from the way you access that image. Use lock/unlock semantics, as they are good for parallelizing work and simplifying image access.
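The four steps above map naturally onto an RAII lock object: the only way to reach the pixels is through the lock, and its destructor is the unlock. This is a hypothetical sketch of the discipline, not the raster::Image API itself:

```cpp
#include <cstdint>
#include <vector>

// The pixel buffer is private; only a WriteLock can touch it.
class Image {
public:
  Image(int w, int h) : m_w(w), m_pixels(w * h, 0) {}

  class WriteLock {
  public:
    explicit WriteLock(Image& img) : m_img(img) { ++img.m_locks; }
    ~WriteLock() { --m_img.m_locks; }        // unlock on scope exit
    uint32_t& pixel(int x, int y) {
      return m_img.m_pixels[y * m_img.m_w + x];
    }
  private:
    Image& m_img;
  };

  bool isLocked() const { return m_locks > 0; }

private:
  int m_w;
  int m_locks = 0;
  std::vector<uint32_t> m_pixels;
};
```

Because the unlock is a destructor, a tiled or compressed representation could be materialized in the lock constructor and written back in the destructor without changing any caller.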

These changes were part of the evolution of Aseprite. They were needed 1) to remove old code and improve productivity working with the new structures (which avoid memory leaks automatically), and 2) to start adding layers of abstraction (e.g. abstract image pixel access) so we can think about future/better internal data structures for representing images.

Why?

This is the first thing I want to talk about: Why Aseprite? Why open source? Why C++? Why am I programming this?

First of all, I started programming graphics software in 1998, with several failed attempts before the original Aseprite source code was written. One of my brothers (Martin) inspired me to create this kind of program at the time. He created a tool to draw graphics for his games, and I followed in his footsteps.

Around 2001, I released the first version of Aseprite (known as ASE). Take a look at this baby:

This monster was programmed using the Allegro library and the C language. The code base evolved, and after some years I switched to C++ (although there is still a lot of legacy C code/design). The good thing about ASE was that from the very first version it already had “infinite” undo/redo (I’ll talk about this in future posts).

Anyway, the question remains. Why? Well, it seems that I liked to program, and I liked programming graphics software a lot (more than games). So I continued the project just as a hobby. I made it open source just because I didn’t have enough time to give it serious support. Also, with programs like GIMP around, it didn’t make sense to make it closed source at the time. (Right now things look quite different. In the following posts, I’ll give details about the future of Aseprite and its source code.)

Why C++? After five years, I decided to switch to C++ because of the limited capability of pure C to create maintainable code. If you are writing an application, you should use C++; there are a large number of advantages: it gives you more abstraction power to represent ideas, improves development time, helps you avoid programming mistakes (e.g. memory leaks), avoids duplicated code (with better techniques than macros), lets you write high-performance code, helps you organize code, etc.

To sum up, in the beginning Aseprite was programmed just for fun, to learn some fancy programming techniques in C, and then in C++. Now it is a tool with a lot of potential, used by several indie game developers. So there are new paths that we have to take.

P.S.: You can download and try old versions of Aseprite from here. (You must use DOSBox to run most of them.)