May 28, 2016

Scarlett Clark

Debian: Outreachy, Debian Reproducible builds Week 1 Progress Report

It has been an exciting first week of my internship. I was able to produce a few patches
and submit them upstream, as well as to Debian packaging. I am hopeful they will get accepted,
preferably upstream so all can benefit!

kapptemplate:
https://bugs.kde.org/show_bug.cgi?id=363448
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=825122

choqok:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=825322

However, after speaking with Lisandro (the choqok maintainer) I decided a better course of action
is to try and fix the actual source of the problem: kconfig_compiler from kde4libs generates
non-UTF-8 .cpp and header files under certain conditions, such as an environment that does not have a
locale set. Of course, I have some help with this from wonderful folks in KDE, which is good
because the kde4libs codebase is HUGE! So I hope to have two new bugs early next week for choqok.

I checked up on my existing kdevplatform bug:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=815962

And noticed it has not received attention, so I created an upstream bug:
https://bugs.kde.org/show_bug.cgi?id=363615

And updated the DEP-3 headers of the Debian bug's patch with the upstream bug URL.

I have been working on kdevelop-php without success yet. The remaining issues look like a build-id
difference (I still need to find the source) and an embedded kernel version, which I think I found,
though I will reach out to my awesome mentor to get some help on this one.

I did not knock out quite as many builds as I wanted, but I picked some hard ones
that are new to me 🙂 So in the end it was a very successful first week, because I
have learned several new things that will help me with future reproducible builds.

Have a great weekend.

28 May, 2016 12:00AM by Scarlett Clark

May 27, 2016

Mike Gabriel

MATE 1.14 landing in Debian unstable...

I just did a bundle upload of all MATE 1.14 related packages to Debian unstable. Packages are currently building for the 23 architectures supported by Debian; build status can be viewed on the DDPO page of the Debian MATE Packaging Team [1].

Credits

Again a big thanks to the packaging team. Martin Wimpress again did a fabulous job in bumping all packages towards the 1.14 release series over the past few weeks. During the last week, I reviewed his work and uploaded all binary packages to a staging repository.

Also a big thanks to Vangelis Mouhtsis, who recently added more hardening support to all those MATE packages that do some sort of C compilation at build time.

After testing all MATE 1.14 packages on a Debian unstable system, I decided to do a bundle upload today. Packages should be falling out of the build daemons within the next couple of hours/days (depending on the architecture being built for).

GTK2 -> GTK3

The greatest change for this release of MATE to Debian is the switch over from GTK2 to GTK3.

People using the MATE desktop environment on Debian systems are invited to test the new MATE 1.14 packages and give feedback via the Debian bug tracker, esp. on the user experience regarding the switch over to GTK3.

Thanks to all who help getting MATE 1.14 in Debian better every day!!!

Known issues when running in NXv3 sessions

The new GTK3 build of MATE works fine locally (against a local X.org server). However, it causes some trouble (e.g. graphical glitches) when running in an NXv3 based remote desktop session. Those issues have to be addressed by me (while wearing my NXv3 upstream hat), I guess (sigh...).

light+love,
Mike

[1] https://qa.debian.org/developer.php?login=pkg-mate-team@lists.alioth.deb...

27 May, 2016 01:11PM by sunweaver

Patrick Matthäi

Package updates from May

Here is some news about my packaging work from May:

  • OTRS
    • I have updated it to version 5.0.10
    • Also I have updated the jessie backports version from 5.0.8 to 5.0.10
    • I still have to look into the new issue #825291 (the database update with PostgreSQL fails with a UTF-8 Perl error); maybe someone has an idea?
  • needrestart
    • Thanks to Thomas Liske (the upstream author) for addressing almost all open bugs and wishes from the Debian BTS and GitHub. Version 2.8 fixes 6 Debian bugs
    • Already available in jessie-backports :)
  • geoip-database
    • As usual package updated and uploaded to jessie-backports and wheezy-backports-sloppy
  • geoip
    • Is someone here interested in fixing #811767 with GCC 6? I was not able to fix it
    • .. and if it compiles, the result segfaults :(
  • fglrx-driver
    • I have removed the fglrx-driver from the Debian sid/stretch repository
    • This means that fglrx in Debian is dead
    • You should use the amdgpu driver instead :)
  • icinga2
    • After some more new upstream releases I have updated the jessie-backports version to 2.4.10 and it works like a charm :)

27 May, 2016 09:18AM by the-me

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

rfoaas 0.1.9

[Image: rfoaas greed() example]

Time for a new release! We just updated rfoaas on CRAN, and it now corresponds to version 0.1.9 of the FOAAS API.

The rfoaas package provides an interface for R to the most excellent FOAAS service, which provides a modern, scalable and RESTful web service for the frequent need to tell someone to f$#@ off.

Release 0.1.9 brings three new access point functions: greed(), me() and morning(). It also adds an S3 print method for the returned object. A demo of the first of these additions is shown in the image in this post.

As usual, CRANberries provides a diff to the previous CRAN release. Questions, comments etc should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

27 May, 2016 02:02AM

May 26, 2016

Iustin Pop

First run in 2016

Today I finally ran a bit outside, for the first time in 2016. Actually, for even longer—the first run since May 2015. I have been only biking in the last year, so this was a very pleasant change of pace (hah), even if just a short run (below 4K).

The funny thing is that since I've been biking consistently (and hard) in the last two months, my fitness level is reasonable, so I managed to beat my all-time personal records for 1 Km and 1 mile (I never sprint, so these are just 'best of' segments out of longer runs). It's probably because I only did ~3.8Km, but still, I was very surprised, since I planned and did an easy run. How could I beat my all-time PR, even better than the times back in 2012 when I was doing regular running?

Even the average pace over the entire run was better than my last training runs (~5Km) back in April/May 2015, by 15-45s.

I guess cross-training does work after all, at least when competing against myself ☺

26 May, 2016 09:06PM

hackergotchi for Lisandro Damián Nicanor Pérez Meyer

Lisandro Damián Nicanor Pérez Meyer

Do you want Qt5's QWebEngine in Debian? Do you have library packaging skills? If so, step up!

So far the only missing submodule in Debian's Qt5 stack is QtWebEngine. None of us current Qt maintainers has the time or will to do the work necessary to have it properly packaged.

So if you would like to have QtWebEngine in Debian and:

  • You have C++ libraries' packaging skills.
  • You have a powerful enough machine/enough patience to do the necessary builds (8+ GB RAM+swap required).
  • You are willing to deal with 3rd party embedded software.
  • You are willing to keep up with security fixes.
  • You are accessible through IRC and have the necessary communications skills to work together with the rest of the team.
then you are the right person for this task. Do not hesitate to ping me on #debian-kde on irc.oftc.net.

26 May, 2016 02:51PM by Lisandro Damián Nicanor Pérez Meyer (noreply@blogger.com)

hackergotchi for Michael Prokop

Michael Prokop

My talk at OSDC 2016: Continuous Integration in Data Centers – Further 3 Years Later

The Open Source Data Center Conference (OSDC) was a pleasure and a great event; Netways clearly knows how to run a conference.

This year at OSDC 2016 I gave a talk titled “Continuous Integration in Data Centers – Further 3 Years Later“. The slides from this talk are available online (PDF, 6.2MB). Thanks to Netways folks also a recording is available:

This embedded video doesn’t work for you? Try heading over to YouTube.

Note: my talk was kind of an update and extension for the (german) talk I gave at OSDC 2013. If you’re interested, the slides (PDF, 4.3MB) and the recording (YouTube) from my talk in 2013 are available online as well.

26 May, 2016 07:06AM by mika

hackergotchi for Norbert Preining

Norbert Preining

Shotwell vs. digiKam

How to manage your photos? That is probably the biggest question for anyone doing anything with a photo camera. As camera resolutions grow, the amount of data we have to manage keeps growing. In my case I am talking about more than 50000 photos and videos taking up about 200GB of disk space, constantly growing. There are several photo management applications out there; I guess the most commonly used ones are Shotwell for the Gnome desktop, digiKam for the KDE world, and FotoXX. I have now used Shotwell and digiKam for quite some time, and collect here my experiences of the strengths and weaknesses of the two programs. FotoXX seems to be very powerful, too, but I haven't tested it yet.
[Image: Shotwell vs. digiKam]

There is no clear winner here, unfortunately. Both have their strengths and weaknesses, and as a consequence I am using both in parallel.

Before I start, a clear declaration: I have been using Shotwell for many years, and have myself contributed considerable code to Shotwell, in particular the whole comment system (comments for photos and events), as well as improvements to the Piwigo upload features. I started using digiKam some months ago when I started to look into offloading parts of my photo library to external devices. Since then I have used both in parallel.

Let us start with what these programs say about themselves:

Shotwell is declared as a Photo Manager for Gnome 3, with the following features:

  • Import from disk or camera
  • Organize by time-based Events, Tags (keywords), Folders, and more
  • View your photos in full-window or fullscreen mode
  • Crop, rotate, color adjust, straighten, and enhance photos
  • Slideshow
  • Video and RAW photo support
  • Share to major Web services, including Facebook, Flickr, and YouTube

digiKam says about itself that it is an advanced digital photo management application for Linux, Windows, and Mac OS X. It has a very long feature page with a short list at the beginning:

  • import pictures
  • organize your collection
  • view items
  • edit and enhance
  • create (slideshows, calendar, print, …)
  • share your creations (using social web services, email, your own web gallery, …)

Now that sounds like they are very similar, but upon using them it turns out that there are huge differences that can easily be summed up in a short statement:

Shotwell is Gnome 3 – that means – get rid of functionality.

digiKam is KDE – that means – provide as much functionality as possible.

Now before you run after me with a knife because you do not agree with me on the above, either read on, or stop reading. I am not interested in flame wars over Gnome versus KDE philosophy. I have been using Gnome for many years, and tried to convince myself of Gnome 3 for more than a year – until I threw out all of it but a few selected programs, and even their number is going down.

Let us look at those aspects I am using: organization, offline, sharing, editing.

Organization

In Shotwell, your photos are organized into events, independent of their location on disk. These events can have a title and a comment and collect a set of related photos. In my case I often have photos from two or more cameras (my camera and mobile, photos of friends), which I keep in separate directories within a main directory for the event. For example I have a folder 2016/05.21-22.Climbing.Tanigakadake with two sub-folders Norbert (for my photos) and Friend (for my friend's photos).

In Shotwell all these photos are in the same event, which is shown with the title 05.21-22 Tanigakawadake Climbing within the year 2016 and the month of May.
[Image: Shotwell events view]

So in short – Shotwell distinguishes between disk layout and album/event names.

In digiKam there is a strict 1:1 connection between disk layout and album names. Albums are directories. One can adjust the viewer to show all photos of sub-albums in the main album, and by this one can achieve the same effect of merging all photos of my friend and myself. The good thing about this approach is that one can easily have sub-albums: imagine visiting three different islands of Hawaii during one trip. This is easy to achieve in digiKam, but hard in Shotwell.
[Image: digiKam albums view]

Other organization methods

Both Shotwell and digiKam support tags, including hierarchical tags, and ratings (0-5 stars). Shotwell in addition has a quick flag action that I used quite often for initially selecting photos, as well as accepted and rejected markers. digiKam also has so-called “picks” (no pick, rejected, pending, accepted) and “colors” (which I have not used so far). Both programs have some face detection support, but I haven't used this either.

So with respect to organization there is no clear winner. I like the Event idea of Shotwell, or better, the separation of events from the disk structure. But on the other hand, Shotwell does not allow for sub-albums, which is also a pain.

No clear winner – draw

Offline storage

That is simple: Shotwell – forget it, it is not reasonably possible. One can move parts to an external HD, then unplug it, and Shotwell will tell you that all the photos are missing. And when you plug the external HD in again, it will redetect them. But this is not proper support, just a consequence of hash-sum storage. Separation into several libraries (online and offline) is not supported either.

On the other hand, digiKam supports multiple libraries, partly offline, without a hitch. I would love to have this feature in Shotwell, because I need to free disk space, urgently!!!

Clear winner: digiKam

Sharing

Again, here my testing is very restricted – I am using my own Piwigo installation exclusively. Here Shotwell is excellent in providing support for various features: upload to an existing category, create a new category, resize, optionally remove tags, add comments to albums and photos, etc. (partly implemented by me 😉)
[Image: Shotwell Piwigo upload dialog]

On the other hand digiKam has a very barebone interface to Piwigo – you can upload photos to an existing album and resize them, but that is already everything. One also needs to say that the list of supported services in digiKam is by far longer than the one in Shotwell, but the main services (usual suspects) are supported in both.
[Image: digiKam Piwigo export dialog]

Clear winner: Shotwell (but I haven’t tested other upload services).

Editing

The editing capabilities of Shotwell are again very restricted: red-eye removal, resizing, levels, …
[Image: Shotwell editing tools]

digiKam here is more like photo editing software with loads of tools and features. I haven't even explored all the options – maybe I can get rid of GIMP, too?
[Image: digiKam image editor]

Clear winner: digiKam

Conclusions

While I am still working with both, I actually would love to move completely to digiKam. I simply cannot stand the Gnome 3 philosophy of reducing functionality to a minimum for dummy users. There is a market for that, sure, but I am not part of it. Unfortunately, the missing Event support, and even more the completely minimal support for Piwigo sharing in digiKam, are a big show stopper at the moment. Even after testing the upcoming digiKam 5.0 version for some time, I didn't see any improvement with respect to sharing.

That leaves me with the only option: to continue working with both programs, and hope to get a new, bigger SSD before the current one runs out of space. Of course I could start hacking on the digiKam source to add proper Piwigo support – maybe I will do this when I have a bit more time.

26 May, 2016 03:33AM by Norbert Preining

May 25, 2016

Petter Reinholdtsen

Isenkram with PackageKit support - new version 0.23 available in Debian unstable

The isenkram system is a user-focused solution in Debian for handling hardware-related packages. The idea is to have a database of mappings between hardware and packages, and pop up a dialog suggesting that the user install the packages needed to use a given hardware dongle. Some use cases: when you insert a Yubikey, it proposes to install the software needed to control it; when you insert a braille reader, it proposes to install the packages needed to send text to the reader; and when you insert a ColorHug screen calibrator, it suggests installing the driver for it. The system works well, and even has a few command-line tools to install firmware packages and packages for the hardware already in the machine (as opposed to hotpluggable hardware).

The system was initially written using aptdaemon, because I found good documentation and example code on how to use it. But aptdaemon is going away and is generally being replaced by PackageKit, so Isenkram needed a rewrite. And today, thanks to the great patch from my colleague Sunil Mohan Adapa in the FreedomBox project, the rewrite finally took place. I've just uploaded a new version of Isenkram into Debian unstable with the patch included, and the default for the background daemon is now to use PackageKit. To check it out, install the isenkram package, insert some hardware dongle and see if it is recognised.

If you want to know what kind of packages isenkram would propose for the machine it is running on, you can check out the isenkram-lookup program. This is what it looks like on a Thinkpad X230:

% isenkram-lookup 
bluez
cheese
fprintd
fprintd-demo
gkrellm-thinkbat
hdapsd
libpam-fprintd
pidgin-blinklight
thinkfan
tleds
tp-smapi-dkms
tp-smapi-source
tpb
%

The hardware mappings come from several places. The preferred way is for packages to announce their hardware support using the cross distribution appstream system. See previous blog posts about isenkram to learn how to do that.

25 May, 2016 08:20AM

May 24, 2016

Carl Chenet

Tweet your database with db2twitter

Follow me also on Diaspora*diaspora-banner or Twitter 

You have a database (MySQL, PostgreSQL, see supported database types) and a tweet pattern, and want to automatically tweet on a regular basis? No need for RSS, fancy tricks, or a 3rd-party website to translate RSS to Twitter. Just use db2twitter.

A quick example of a tweet generated by db2twitter:

[Image: example tweet generated by db2twitter]

The new version 0.6 offers support for tweets with an image. How cool is that?

 

db2twitter is developed by and run for LinuxJobs.fr, the job board of the French-speaking Free Software and Open Source community.

[Image: LinuxJobs.fr banner]

db2twitter also has cool options like:

  • only tweet during a user-specified time window (e.g. 9AM-6PM)
  • use a user-specified SQL filter in order to get data from the database (e.g. only fetch rows where status == "edited")

db2twitter is coded in Python 3.4, uses SQLAlchemy (see supported database types) and Tweepy. The official documentation is available on readthedocs.
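
As a rough illustration of the idea (this is not db2twitter's actual code or configuration; the table, columns, pattern and credentials below are all made up), the core loop amounts to fetching rows with SQLAlchemy and posting formatted tweets with Tweepy:

    import sqlalchemy
    import tweepy

    # Hypothetical job-board database and tweet pattern, for illustration only.
    engine = sqlalchemy.create_engine('postgresql://user:secret@localhost/jobs')
    pattern = 'New job: {title} at {company} {url}'

    auth = tweepy.OAuthHandler('consumer_key', 'consumer_secret')
    auth.set_access_token('access_token', 'access_secret')
    api = tweepy.API(auth)

    with engine.connect() as conn:
        rows = conn.execute(sqlalchemy.text(
            "SELECT title, company, url FROM jobs WHERE status = 'edited'"))
        for row in rows:
            # Fill the tweet pattern from the row and post it.
            api.update_status(pattern.format(
                title=row.title, company=row.company, url=row.url))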

 


24 May, 2016 10:00PM by Carl Chenet

Mike Gabriel

Arctica Project: The Telekinesis Framework, coming soon...

The Arctica Project is a task force of people reinventing the realm of remote desktop computing on Linux. One core component for the multimedia experience in remote desktop / application scenarios is the to-be-reloaded / upcoming Telekinesis Framework.

Telekinesis provides a framework for developing GUI applications that have a client and server side component. Those applications are visually merged and presented to the end user in such a way that the end user's “user experience” is the same as if the user was interacting with a strictly server side application. Telekinesis mediates the communication between those server side and client side application parts.

As a reference implementation you can imagine a server side media player GUI (TeKi-aware application) and a client side video overlay (corresponding TeKi-aware service). The media player GUI "remote-controls" the client side video overlay. The video overlay receives its video stream from the server. All these interactions are mediated through Telekinesis.

A proof of concept has been developed for X2Go in 2012. For the Arctica Server, we are currently doing a (much cleaner!) rewrite of the original prototype [1]. See [2] for the first whitepaper describing how to integrate Telekinesis into existing remote desktop solutions. See [3] for a visual demonstration of the potentials of Telekinesis (still using X2Go underneath and the original Telekinesis prototype).

The heavy lifting around Telekinesis development and conceptual design is performed by my project partner Lee from the GZ Nianguan FOSS Team [4]. Thanks for continuously putting your time and energy into the co-founding of the Arctica Project. Thanks for always reminding me to do benchmarks!!!

light+love,
Mike

[1] http://code.x2go.org/gitweb?p=telekinesis.git;a=summary
[2] https://github.com/ArcticaProject/ArcticaDocs/blob/master/Telekinesis/Te...
[3] https://www.youtube.com/watch?v=57AuYOxXPRU
[4] https://github.com/gznget

24 May, 2016 08:21PM by sunweaver

Thorsten Alteholz

Debian and the Internet of Things

Everybody is talking about the Internet of Things. Unfortunately there is no sign of it in Debian yet. Besides some smaller packages like sispmctl, usbrelay or the 1-wire support in digitemp and owfs, there is not much software to control devices over a network.

With the recent upload of alljoyn-core-1504 this might change.

The Alljoyn Framework, of which the Alljoyn Core is just one of several modules, lets devices and applications detect each other and communicate with one another over a D-Bus-like message bus. The development of the framework was started by Qualcomm some years ago and is now managed by the AllSeen Alliance, a nonprofit consortium. The software is licensed under the ISC license.

This first upload is just the first step of a long journey. Other modules that compose the framework and already have a released tarball are related to lighting products, gateways to overcome the boundaries of the local network, and much more. In the near future it is also planned to have modules that attach Z-Wave, ZigBee or Bluetooth devices to the Alljoyn bus.

So all in all, this looks like an exciting task and everybody is invited to help maintaining the software in Debian.

24 May, 2016 06:04PM by alteholz

hackergotchi for Michal Čihař

Michal Čihař

Gammu release day

There has been some silence on the Gammu release front and it's time to change that. Today new releases of Gammu, python-gammu and Wammu have all gone out. As you might guess, all are bugfix releases.

List of changes for Gammu 1.37.3:

  • Improved support for Huawei E398.
  • Improved support for Huawei/Vodafone K4505.
  • Fixed possible crash if SMSD used in library.
  • Improved support for Huawei E180.

List of changes for python-gammu 2.6:

  • Fixed error when creating new contact.
  • Fixed possible testsuite errors.

List of changes for Wammu 0.41:

  • Fixed crash with unicode home directory.
  • Fixed possible crashes in error handler.
  • Improved error handling when scanning for Bluetooth devices.

All updates are also on their way to Debian sid and Gammu PPA.

Would you like to see more features in the Gammu family? You can support further Gammu development at Bountysource salt or by direct donation.


24 May, 2016 04:00PM

hackergotchi for Alberto García

Alberto García

I/O bursts with QEMU 2.6

QEMU 2.6 was released a few days ago. One new feature that I have been working on is the new way to configure I/O limits in disk drives to allow bursts and increase the responsiveness of the virtual machine. In this post I’ll try to explain how it works.

The basic settings

First I will summarize the basic settings that were already available in earlier versions of QEMU.

Two aspects of the disk I/O can be limited: the number of bytes per second and the number of operations per second (IOPS). For each one of them the user can set a global limit or separate limits for read and write operations. This gives us a total of six different parameters.

I/O limits can be set using the throttling.* parameters of -drive, or using the QMP block_set_io_throttle command. These are the names of the parameters for both cases:

-drive                   block_set_io_throttle
throttling.iops-total    iops
throttling.iops-read     iops_rd
throttling.iops-write    iops_wr
throttling.bps-total     bps
throttling.bps-read      bps_rd
throttling.bps-write     bps_wr

It is possible to set limits for both IOPS and bps at the same time, and for each case we can decide whether to have separate read and write limits or not, but if iops-total is set then neither iops-read nor iops-write can be set. The same applies to bps-total and bps-read/write.

The default value of these parameters is 0, and it means unlimited.

In its most basic usage, the user can add a drive to QEMU with a limit of, say, 100 IOPS with the following -drive line:

-drive file=hd0.qcow2,throttling.iops-total=100

We can do the same using QMP. In this case all these parameters are mandatory, so we must set to 0 the ones that we don’t want to limit:

   { "execute": "block_set_io_throttle",
     "arguments": {
        "device": "virtio0",
        "iops": 100,
        "iops_rd": 0,
        "iops_wr": 0,
        "bps": 0,
        "bps_rd": 0,
        "bps_wr": 0
     }
   }

I/O bursts

While the settings that we have just seen are enough to prevent the virtual machine from performing too much I/O, it can be useful to allow the user to exceed those limits occasionally. This way we can have a more responsive VM that is able to cope better with peaks of activity while keeping the average limits lower the rest of the time.

Starting from QEMU 2.6, it is possible to allow the user to do bursts of I/O for a configurable amount of time. A burst is an amount of I/O that can exceed the basic limit, and there are two parameters that control them: their length and the maximum amount of I/O they allow. These two can be configured separately for each one of the six basic parameters described in the previous section, but here we’ll use ‘iops-total’ as an example.

The I/O limit during bursts is set using ‘iops-total-max’, and the maximum length (in seconds) is set with ‘iops-total-max-length’. So if we want to configure a drive with a basic limit of 100 IOPS and allow bursts of 2000 IOPS for 60 seconds, we would do it like this (the line is split for clarity):

   -drive file=hd0.qcow2,
          throttling.iops-total=100,
          throttling.iops-total-max=2000,
          throttling.iops-total-max-length=60

Or with QMP:

   { "execute": "block_set_io_throttle",
     "arguments": {
        "device": "virtio0",
        "iops": 100,
        "iops_rd": 0,
        "iops_wr": 0,
        "bps": 0,
        "bps_rd": 0,
        "bps_wr": 0,
        "iops_max": 2000,
        "iops_max_length": 60,
     }
   }

With this, the user can perform I/O on hd0.qcow2 at a rate of 2000 IOPS for 1 minute before it’s throttled down to 100 IOPS.

The user will be able to do bursts again if there’s a sufficiently long period of time with unused I/O (see below for details).

The default value for ‘iops-total-max’ is 0 and it means that bursts are not allowed. ‘iops-total-max-length’ can only be set if ‘iops-total-max’ is set as well, and its default value is 1 second.

Controlling the size of I/O operations

When applying IOPS limits all I/O operations are treated equally regardless of their size. This means that the user can take advantage of this in order to circumvent the limits and submit one huge I/O request instead of several smaller ones.

QEMU provides a setting called throttling.iops-size to prevent this from happening. This setting specifies the size (in bytes) of an I/O request for accounting purposes. Larger requests will be counted proportionally to this size.

For example, if iops-size is set to 4096 then an 8KB request will be counted as two, and a 6KB request will be counted as one and a half. This only applies to requests larger than iops-size: smaller requests will always be counted as one, no matter their size.

The default value of iops-size is 0 and it means that the size of the requests is never taken into account when applying IOPS limits.
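
A sketch of that accounting rule (my own illustration in Python, not QEMU's code):

    def iops_cost(request_bytes, iops_size):
        # Number of I/O units a request counts for under iops-size accounting.
        if iops_size == 0 or request_bytes <= iops_size:
            # Requests up to iops-size (and all requests when iops-size
            # is 0) count as a single operation.
            return 1.0
        # Larger requests are counted proportionally to iops-size.
        return request_bytes / iops_size

    assert iops_cost(8192, 4096) == 2.0  # an 8KB request counts as two
    assert iops_cost(6144, 4096) == 1.5  # a 6KB request counts as one and a half
    assert iops_cost(512, 4096) == 1.0   # smaller requests always count as one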

Applying I/O limits to groups of disks

In all the examples so far we have seen how to apply limits to the I/O performed on individual drives, but QEMU allows grouping drives so they all share the same limits.

This feature is available since QEMU 2.4. Please refer to the post I wrote when it was published for more details.

The Leaky Bucket algorithm

I/O limits in QEMU are implemented using the leaky bucket algorithm (specifically the “Leaky bucket as a meter” variant).

This algorithm uses the analogy of a bucket that leaks water constantly. The water that gets into the bucket represents the I/O that has been performed, and no more I/O is allowed once the bucket is full.

To see the way this corresponds to the throttling parameters in QEMU, consider the following values:

  iops-total=100
  iops-total-max=2000
  iops-total-max-length=60
  • Water leaks from the bucket at a rate of 100 IOPS.
  • Water can be added to the bucket at a rate of 2000 IOPS.
  • The size of the bucket is 2000 x 60 = 120000.
  • If iops-total-max is unset then the bucket size is 100.

[Image: leaky bucket diagram]

The bucket is initially empty, therefore water can be added until it’s full at a rate of 2000 IOPS (the burst rate). Once the bucket is full we can only add as much water as it leaks, therefore the I/O rate is reduced to 100 IOPS. If we add less water than it leaks then the bucket will start to empty, allowing for bursts again.

Note that since water is leaking from the bucket even during bursts, it will take a bit more than 60 seconds at 2000 IOPS to fill it up. After those 60 seconds the bucket will have leaked 60 x 100 = 6000, allowing for 3 more seconds of I/O at 2000 IOPS.

Also, due to the way the algorithm works, longer burst can be done at a lower I/O rate, e.g. 1000 IOPS during 120 seconds.
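To make the mechanics concrete, here is a minimal Python simulation of the "leaky bucket as a meter" idea (my own sketch, not QEMU's implementation):

    class LeakyBucket:
        # Water in the bucket represents I/O performed; the bucket leaks
        # at the base rate, and no more I/O is allowed once it is full.

        def __init__(self, iops_total, iops_total_max, iops_total_max_length):
            self.leak_rate = iops_total  # e.g. 100 IOPS
            if iops_total_max:
                # Bucket size: burst rate times burst length, e.g. 2000 x 60.
                self.capacity = iops_total_max * iops_total_max_length
            else:
                # No bursts: the bucket only holds one second at the base rate.
                self.capacity = iops_total
            self.level = 0.0
            self.last = 0.0

        def account(self, now, amount=1):
            # Leak water for the elapsed time, then try to add the request.
            self.level = max(0.0, self.level - (now - self.last) * self.leak_rate)
            self.last = now
            if self.level + amount > self.capacity:
                return False  # bucket full: the request has to wait
            self.level += amount
            return True

    bucket = LeakyBucket(iops_total=100, iops_total_max=2000,
                         iops_total_max_length=60)

With these values the bucket fills after a bit more than 60 seconds of sustained 2000 IOPS (a capacity of 120000 divided by a net fill rate of 1900 per second), matching the behaviour described above.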

Acknowledgments

As usual, my work in QEMU is sponsored by Outscale and has been made possible by Igalia and the help of the QEMU development team.

[Image: Igalia and Outscale logos]

Enjoy QEMU 2.6!

24 May, 2016 11:47AM by berto

May 23, 2016

hackergotchi for Daniel Pocock

Daniel Pocock

PostBooks, PostgreSQL and pgDay.ch talk

PostBooks 4.9.5 was recently released and the packages for Debian (including jessie-backports), Ubuntu and Fedora have been updated.

PostBooks at pgDay.ch in Rapperswil, Switzerland

pgDay.ch is coming on Friday, 24 June. It is at the HSR Hochschule für Technik Rapperswil, at the eastern end of Lake Zurich.

I'll be making a presentation about PostBooks in the business track at 11:00.

Getting started with accounting using free, open source software

If you are not currently using a double-entry accounting system or if you are looking to move to a system that is based on completely free, open source software, please see my comparison of free, open source accounting software.

Free and open source solutions offer significant advantages: flexibility (businesses can choose any programmer to modify the code), and SQL back-ends, multi-user support and multi-currency support come as standard. These are all things that proprietary vendors charge extra money for.

Accounting software is the lowest common denominator in the world of business software; people keen on the success of free and open source software may find that encouraging businesses to use one of these solutions is a great way to lay a foundation where other free software solutions can thrive.

PostBooks new web and mobile front end

xTuple, the team behind PostBooks, has been busy developing a new Web and Mobile front-end for their ERP, CRM and accounting suite, powered by the same PostgreSQL backend as the Linux desktop client.

More help is needed to create official packages of the JavaScript dependencies before the Web and Mobile solution itself can be packaged.

23 May, 2016 05:35PM by Daniel.Pocock

Enrico Zini

I chipped in

I clicked on a random link and found myself again in front of a wired.com popup that wanted to explain to me what I should think about ad blockers.

This time I was convinced, and I took my wallet out.

I finally donated $35 to AdBlock.

(And then somebody pointed me to uBlock Origin and I switched to that.)

23 May, 2016 12:45PM

Petter Reinholdtsen

Discharge rate estimate in new battery statistics collector for Debian

Yesterday I updated the battery-stats package in Debian with a few patches sent to me by skilled and enterprising users. There were some nice user-visible changes. First of all, both desktop menu entries now work. A design flaw in one of the scripts made the history graph fail to show up (its PNG was dumped in ~/.xsession-errors) if no controlling TTY was available. The script worked when called from the command line, but not when called from the desktop menu. I changed this to look for a DISPLAY variable or a TTY before deciding where to draw the graph, and now the graph window pops up as expected.

The next new feature is a discharge rate estimator in one of the graphs (the one showing the last few hours). Also new is the use of colours, showing charging in blue and discharging in red. The percentages in this graph are relative to the last full charge, not the battery design capacity.

The other graph shows the entire history of the collected battery statistics, comparing it to the design capacity of the battery to visualise how the battery lifetime gets shorter over time. The red line in this graph is what the previous graph considers 100 percent:

In this graph you can see that I only charge the battery to 80 percent of last full capacity, and how the capacity of the battery is shrinking. :(

The last new feature is in the collector, which will now handle more hardware models. On some hardware, Linux power supply information is stored in /sys/class/power_supply/ACAD/, while the collector previously only looked in /sys/class/power_supply/AC/. Now both are checked to figure out whether power is connected to the machine.
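
A sketch of that check (my own illustration of the approach, not the collector's actual code):

    import os

    def ac_power_online():
        # The Linux power supply class exposes an "online" attribute for
        # AC adapters; the sysfs entry may be named AC or ACAD depending
        # on the hardware.
        for name in ('AC', 'ACAD'):
            path = '/sys/class/power_supply/%s/online' % name
            if os.path.exists(path):
                with open(path) as f:
                    return f.read().strip() == '1'
        return False  # no AC adapter entry found; assume running on battery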

If you are interested in how your laptop battery is doing, please check out the battery-stats package in Debian unstable, or rebuild it on Jessie to get it working on Debian stable. :) The upstream source is available from GitHub. Patches are very welcome.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

23 May, 2016 07:35AM

May 22, 2016

Reproducible builds folks

Reproducible builds: week 56 in Stretch cycle

What happened in the Reproducible Builds effort between May 15th and May 21st 2016:

Media coverage

Blog posts from our GSoC and Outreachy contributors:

Documentation update

Ximin Luo clarified instructions on how to set SOURCE_DATE_EPOCH.
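
For tools that want to honour SOURCE_DATE_EPOCH, the convention is deliberately simple: use the variable as the embedded build date when it is set, and fall back to the current time otherwise. A minimal Python sketch of the idea (my own illustration, not code from the specification):

    import os
    import time

    # Use SOURCE_DATE_EPOCH as the embedded build date when set,
    # falling back to the current time otherwise.
    build_timestamp = int(float(os.environ.get('SOURCE_DATE_EPOCH', time.time())))

    # Format in UTC so the output does not depend on the build machine's timezone.
    print(time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime(build_timestamp)))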

Toolchain fixes

  • Joao Eriberto Mota Filho uploaded txt2man/1.5.6-4, which honours SOURCE_DATE_EPOCH to generate reproducible manpages (original patch by Reiner Herrmann).
  • Dmitry Shachnev uploaded sphinx/1.4.1-1 to experimental with improved support for SOURCE_DATE_EPOCH (original patch by Alexis Bienvenüe).
  • Emmanuel Bourg submitted a patch against debhelper to use a fixed username while building ant packages.

Other upstream fixes

  • Doxygen merged a patch by Ximin Luo, which uses UTC as timezone for embedded timestamps.
  • CMake applied a patch by Reiner Herrmann in their next branch, which sorts file lists obtained with file(GLOB).
  • GNU tar 1.29 with support for --clamp-mtime has been released upstream, closing #816072, which was the blocker for #759886 "dpkg-dev: please make mtimes of packaged files deterministic" which we now hope will be closed soon.

Packages fixed

The following 18 packages have become reproducible due to changes in their build dependencies: abiword angband apt-listbugs asn1c bacula-doc bittornado cdbackup fenix gap-autpgrp gerbv jboss-logging-tools invokebinder modplugtools objenesis pmw r-cran-rniftilib x-loader zsnes

The following packages have become reproducible after being fixed:

Some uploads have fixed some reproducibility issues, but not all of them:

  • bzr/2.7.0-6 by Jelmer Vernooij.
  • libsdl2/2.0.4+dfsg2-1 by Manuel A. Fernandez Montecelo.
  • pvm/3.4.5-13 by James Clarke.
  • refpolicy/2:2.20140421-11 by Laurent Bigonville.
  • subvertpy/0.9.3-4 by Jelmer Vernooij.

Patches submitted that have not made their way to the archive yet:

  • #824413 against binutils by Chris Lamb: filter build user and date from test log case-insensitively
  • #824452 against python-certbot by Chris Lamb: prevent PID from being embedded into documentation (forwarded upstream)
  • #824453 against gtk-gnutella by Chris Lamb: use SOURCE_DATE_EPOCH for deterministic timestamp (merged upstream)
  • #824454 against python-latexcodec by Chris Lamb: fix for parsing the changelog date
  • #824472 against torch3 by Alexis Bienvenüe: sort object files while linking
  • #824501 against cclive by Alexis Bienvenüe: use SOURCE_DATE_EPOCH as embedded build date
  • #824567 against tkdesk by Alexis Bienvenüe: sort order of files which are parsed by mkindex script
  • #824592 against twitter-bootstrap by Alexis Bienvenüe: use shell-independent printing
  • #824639 against openblas by Alexis Bienvenüe: sort object files while linking
  • #824653 against elkcode by Alexis Bienvenüe: sort list of files locale-independently
  • #824668 against gmt by Alexis Bienvenüe: use SOURCE_DATE_EPOCH for embedded timestamp (similar patch by Bas Couwenberg already applied and forwarded upstream)
  • #824808 against gdal by Alexis Bienvenüe: sort object files while linking
  • #824951 against libtomcrypt by Reiner Herrmann: use SOURCE_DATE_EPOCH for timestamp embedded into metadata

Reproducibility-related bugs filed:

  • #824420 against python-phply by ceridwen: parsetab.py file is not included when building with DEB_BUILD_OPTIONS="nocheck"
  • #824572 against dpkg-dev by Ximin Luo: request to export SOURCE_DATE_EPOCH in /usr/share/dpkg/*.mk.

Package reviews

51 reviews have been added, 19 have been updated and 15 have been removed this week.

22 FTBFS bugs have been reported by Chris Lamb, Santiago Vila, Niko Tyni and Daniel Schepler.

tests.reproducible-builds.org

Misc.

  • During the discussion on debian-devel about PIE, an archive rebuild was suggested by Bálint Réczey, and Holger Levsen suggested to coordinate this with a required archive rebuild for reproducible builds.
  • Ximin Luo improved misc.git/reports (the tools that help write the weekly statistics for this blog) quite a bit; h01ger contributed a little too.

This week's edition was written by Reiner Herrmann and Holger Levsen and reviewed by a bunch of Reproducible builds folks on IRC.

22 May, 2016 09:44PM

Antonio Terceiro

Adopting pristine-tar

As of yesterday, I am the new maintainer of pristine-tar. As it is the case for most of Joey Hess’ creations, it is an extremely useful tool, and used in a very large number of Debian packages which are maintained in git.

My first upload was mostly of a terrain-recognition nature: I did some housekeeping tasks, such as making the build idempotent and making sure all binaries are built with security hardening flags, and wrote a few automated test cases to serve as a build-time and run-time regression test suite. No functional changes were made.

As Joey explained when he orphaned it, there are a few technical challenges involved in making sure pristine-tar stays useful in the future. Although I did read some of the code, I am not particularly familiar with the internals yet, and will be more than happy to get co-maintainers. If you are interested, please get in touch. The source git repository is right there.

22 May, 2016 02:02PM

May 21, 2016

Petter Reinholdtsen

French edition of Lawrence Lessig's book Cultura Libre on Amazon and Barnes & Noble

A few weeks ago the French paperback edition of Lawrence Lessig's 2004 book Cultura Libre was published. Today I noticed that the book is now available from book stores. You can now buy it from Amazon ($19.99), Barnes & Noble ($?) and as always from Lulu.com ($19.99). The revenue is donated to the Creative Commons project. If you buy from Lulu.com, the project currently gets $10.59, while if you buy from one of the book stores most of the revenue goes to the book store, and the Creative Commons project gets much less (I am not sure how much less).

I was a bit surprised to discover that there is a Kindle edition sold by Amazon Digital Services LLC on Amazon. I am not quite sure how that edition was created, but if you want to download an electronic edition (PDF, EPUB, Mobi) generated from the same files used to create the paperback edition, it is available from GitHub.

21 May, 2016 08:50AM

May 20, 2016

Zlatan Todorić

4 months of work turned into GNOME, Debian testing based tablet

Huh, where do I start. I started working for a great CEO and a great company known as Purism. What is so great about it? First of all, the CEO (Todd Weaver) is incredibly passionate about Free software. Yes, you read that correctly: Free software. Not the Open Source definition, but the Free software definition. I want to repeat this like a mantra. At Purism we try to integrate high-end hardware with Free software. Not only that, we want our hardware to be as Free as possible. No, we want to make it entirely Free, but at the moment we don't achieve that. So instead of going the way of using older hardware (as Ministry of Freedom does, and kudos to them for making such an option available), we sacrifice this bit for the momentum we hope to gain - momentum brings growth, and growth brings us a much better position when we sit at the negotiation table with hardware producers. If negotiations fail, with growth we will have enough of a chance to invest heavily in things such as openRISC or freeing cellular modules. We want in the future to provide an entirely Free hardware and software device that has an integrated security and privacy focus while being as easy to use and convenient as any other mainstream OS. And we choose to sacrifice a few things for now to stay in the loop.

Surely that can't be the only thing - and it isn't. Our current hardware runs entirely on Free software. You can install Debian main on it and all will work out of the box. I know, I did this, and I enjoy my Debian more than ever. We also have a margin-share program where we donate part of our profit to Free software projects. We are also discussing a lot a new business model in which our community will get a lot of influence (stay tuned for this). Besides all this, our OS (called PureOS - yes, it is a bit unfortunate that we took the name of a dormant distribution) was Trisquel-based but is now based on Debian testing. The current PureOS 2.0 comes with Cinnamon as the default DE, but we are already baking PureOS 3.0, which is going to come with GNOME Shell as default.

Why is this important? Well, around 12 hours ago we launched a tablet campaign on Indiegogo for tablets that come with GNOME Shell and PureOS as default. Not one, but two tablets actually (although we heavily focus on the 11" one). This is the product of my 4 months of dedicated work at Purism. I must give kudos to all the Purism members who pushed their parts in preparation for this campaign. It was a hell of a ride.

[Image: Librem 11 tablet]

I have also approached (of course!) Debian about the creation of OEM installation ISOs for our Librem products. This way, with every Librem sold that ships with Debian preinstalled, Debian will get a donation. It is our way of showing gratitude to Debian for all the work our community does (yes, I am still an extremely proud Debian dude and I will stay that way!). Oh yes, I am the chief technology person at Purism, and besides all the goals we have, I also plan (dream) of Purism being the company with the highest number of Debian Developers. In that regard I am very proud to say that Matthias Klumpp became part of Purism. Hopefully we will soon extend the Debian population at Purism.

Of course, I think it is fairly well known that I am easy to approach, so if anyone has any questions (as I didn't want this post to be too long), feel free to contact me. Also - in the Free software spirit - we welcome any community engagement, suggestions and/or feedback.

20 May, 2016 01:47PM by Zlatan Todoric

Reproducible builds folks

Improving the process for testing build reproducibility

Hi! I'm Ceridwen. I'm going to be one of the Outreachy interns working on Reproducible Builds for the summer of 2016. My project is to create a tool, tentatively named reprotest, to make the process of verifying that a build is reproducible easier.

The current tools and the Reproducible Builds site have limits on what they can test, and they're not very user friendly. (For instance, I ended up needing to edit the rebuild.sh script to run it on my system.) Reprotest will automate some of the busywork involved and make it easier for maintainers to test reproducibility without detailed knowledge of the process involved. A session during the Athens meeting outlines some of the functionality and command-line and configuration file API goals for reprotest. I also intend to use some ideas, and command-line and config processing boilerplate, from autopkgtest. Reprotest, like autopkgtest, should be able to interface with more build environments, such as schroot and qemu. Both autopkgtest and diffoscope, the program that the Reproducible Builds project uses to check binaries for differences, are written in Python, and as Python is the scripting language I'm most familiar with, I will be writing reprotest in Python too.
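
The core idea (build the same source twice while varying the environment, then compare the results) can be sketched in a few lines of Python. This is only an illustration of the concept, not reprotest's planned interface, and the build command and artifact name below are made up:

    import hashlib
    import os
    import subprocess

    def build(command, artifact, env_changes):
        # Run the build with some environment variations and hash the result.
        env = dict(os.environ, **env_changes)
        subprocess.check_call(command, shell=True, env=env)
        with open(artifact, 'rb') as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Two builds with a different timezone and locale. Identical hashes
    # suggest (but do not prove) that the build is reproducible.
    first = build('make clean all', 'out.bin', {'TZ': 'UTC', 'LANG': 'C'})
    second = build('make clean all', 'out.bin',
                   {'TZ': 'Asia/Tokyo', 'LANG': 'fr_FR.UTF-8'})
    print('reproducible' if first == second else 'differs')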

One of my major goals is to get a usable prototype released in the first three to four weeks. At that point, I want to try to solicit feedback (and any contributions anyone wants to make!). One experience I've had in open source software is that connecting people with software they might want to use is often the hardest part of a project. I've reimplemented existing functionality myself because I simply didn't know that someone else had already written something equivalent, and seen many other people do the same. Once I have the skeleton fleshed out, I'm going to be trying to find and reach out to any other communities, outside the Debian Reproducible Builds project itself, who might find reprotest useful.

20 May, 2016 03:20AM

May 19, 2016

hackergotchi for Matthew Garrett

Matthew Garrett

Your project's RCS history affects ease of contribution (or: don't squash PRs)

Github recently introduced the option to squash commits on merge, and even before then several projects requested that contributors squash their commits after review but before merge. This is a terrible idea that makes it more difficult for people to contribute to projects.

I'm spending today working on reworking some code to integrate with a new feature that was just integrated into Kubernetes. The PR in question was absolutely fine, but just before it was merged the entire commit history was squashed down to a single commit at the request of the reviewer. This single commit contains type declarations, the functionality itself, the integration of that functionality into the scheduler, the client code and a large pile of autogenerated code.

I've got some familiarity with Kubernetes, but even then this commit is difficult for me to read. It doesn't tell a story. I can't see its growth. Looking at a single hunk of this diff doesn't tell me whether it's infrastructural or part of the integration. Given time I can (and have) figured it out, but it's an unnecessary waste of effort that could have gone towards something else. For someone who's less used to working on large projects, it'd be even worse. I'm paid to deal with this. For someone who isn't, the probability that they'll give up and do something else entirely is even greater.

I don't want to pick on Kubernetes here - the fact that this Github feature exists makes it clear that a lot of people feel that this kind of merge is a good idea. And there are certainly cases where squashing commits makes sense. Commits that add broken code and which are immediately followed by a series of "Make this work" commits also impair readability and distract from the narrative that your RCS history should present, and Github present this feature as a way to get rid of them. But that ends up being a false dichotomy. A history that looks like "Commit", "Revert Commit", "Revert Revert Commit", "Fix broken revert", "Revert fix broken revert" is a bad history, as is a history that looks like "Add 20,000 line feature A", "Add 20,000 line feature B".

When you're crafting commits for merge, think about your commit history as a textbook. Start with the building blocks of your feature and make them one commit. Build your functionality on top of them in another. Tie that functionality into the core project and make another commit. Add client support. Add docs. Include your tests. Allow someone to follow the growth of your feature over time, with each commit being a chapter of that story. And never, ever, put autogenerated code in the same commit as an actual functional change.

People can't contribute to your project unless they can understand your code. Writing clear, well commented code is a big part of that. But so is showing the evolution of your features in an understandable way. Make sure your RCS history shows that, otherwise people will go and find another project that doesn't make them feel frustrated.

(Edit to add: Sarah Sharp wrote on the same topic a couple of years ago)


19 May, 2016 11:52PM

Antoine Beaupré

My free software activities, May 2016

Debian Long Term Support (LTS)

This is my 6th month working on Debian LTS, started by Raphael Hertzog at Freexian. This is my largest month so far, for which I had requested 20 hours of work.

Xen work

I spent the largest amount of time working on the Xen packages. We had to re-roll the patches because it turned out we originally just imported the package from Ubuntu as-is. This was a mistake because that package forked off the Debian packaging a while ago and included regressions in the packaging itself, not just security fixes.

So I went ahead and rerolled the whole patchset and tested it on Koumbit's test server. Brian May then completed the upload, which included about 40 new patches, mostly from Ubuntu.

Frontdesk duties

Next up were the frontdesk duties I had taken this week. This was mostly uneventful, although I had forgotten how to do some of the work and thus ended up doing extensive work on the contributors' documentation. This is especially important since new contributors joined the team! I also did a lot of Debian documentation work in my non-sponsored work below.

The triage work involved chasing around missing DLAs, triaging away OpenJDK-6 (for which, let me remind you, security support has ended in LTS), and raising the question of Mediawiki maintenance.

Other LTS work

I also did a bunch of smaller stuff. Of importance, I can note that I uploaded two advisories that were pending from April: NSS and phpMyAdmin. I also reviewed the patches for the ICU update, since I built the one for squeeze (but didn't have time to upload before squeeze hit end-of-life).

I have tried to contribute to the NTP security support but that was way too confusing to me, and I have left it to the package maintainer, who seemed to be on top of things, even if things seem to be in complete chaos and confusion in the world of NTP. I somehow thought that situation had improved with the recent investments in ntpsec and ntimed, but unfortunately Debian has not switched to the ntpsec codebase, so it seems that the NTP efforts have diverged into three different projects instead of converging on a single, better codebase.

Future LTS work

This is likely to be my last month of work on LTS until September. I will try to contribute a few hours in June, but July and August will be very busy for me outside of Debian, so it's unlikely that I contribute much to the project during the summer. My backlog included those packages which might be of interest to other LTS contributors:

  • libxml2: no upstream fix, but needs fixing!
  • tiff{,3}: same mess
  • libgd2: maintainer contacted
  • samba regression: mailed bug #821811 to try to revive the effort
  • policykit-1: to be investigated
  • p7zip: same

Other free software work

Debian documentation

I wrote a detailed short guide to Debian package development, something I felt was missing from the existing corpus, which seems too focused on covering all alternatives. My guide is opinionated: I believe there is a right and a wrong way of doing things, or at least, there are best practices, especially when just patching packages. I ended up retroactively publishing that as a blog post - now I can simply tag an item with blog and it shows up in the blog.

(Of course, because of a mis-configuration on my side, I have suffered from long delays publishing to Debian planet, so all the post dates are off in the Planet RSS feed. This will hopefully be resolved around the time this post is published, but this allowed me to get more familiar with the Planet Venus software, as detailed in that other article.)

Apart from the guide, I have also done extensive research to collate information that allowed me to create workflow graphs of the various Debian repositories, which I have published in the Debian Release section of the Debian wiki. Here is the graph:

It helps me understand how packages flow between different suites and who uploads what where. This emerged after I realized I didn't really understand how "proposed updates" worked. Since we are looking at implementing a similar process for the security queue, I figured it was useful to show what changes would happen, graphically.

I have also published a graph that describes the relations between different software that make up the Debian archive. The idea behind this is also to provide an overview of what happens when you upload a package in the Debian archive, but it is more aimed at Debian developers trying to figure out why things are not working as expected.

The graphs were done with Graphviz, which allowed me to link to various components in the graph easily, which is neat. I also preferred Graphviz over Dia or other tools because it is easier to version and I don't have to bother (too much) about the layout and tweaking the looks. The downside is, of course, that when Graphviz makes the wrong decision, it's actually pretty hard to make it do the right thing, but there are various workarounds that I have found that made the graphs look pretty good.

The source is of course available in git but I feel all this documentation (including the guide) should go in a more official document somewhere. I couldn't quite figure out where. Advice on this would be of course welcome.

Ikiwiki

I have made yet another plugin for Ikiwiki, called irker, which enables wikis to send notifications to IRC channels, thanks to the simple irker bot. I had trouble with Irker in the past, since it was not quite reliable: it would disappear from channels and not return when we'd send it a notification. Unfortunately, the alternative, the KGB bot, is much heavier: each repository needs a server-side, centralized configuration to operate properly.

Irker's design is simpler and more adapted to a simple plugin like this. Let's hope it will work reliably enough for my needs.
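
Part of what makes irker attractive here is how minimal its wire protocol is: a newline-terminated JSON object sent to the irkerd daemon, which listens on localhost port 6659 by default. A rough Python sketch of a notification (the channel URL and message text are made up):

    import json
    import socket

    # One irker notification: "to" is an IRC URL, "privmsg" the message text.
    message = json.dumps({
        'to': 'irc://irc.example.net/#ikiwiki-commits',
        'privmsg': 'wiki: page "sandbox" edited',
    })

    # irkerd listens on localhost:6659 by default.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(('localhost', 6659))
    sock.sendall(message.encode('utf-8') + b'\n')
    sock.close()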

I have also suggested improvements to the footnotes styles, since they looked like hell in my Debian guide. It turns out this was an issue with the multimarkdown plugin that doesn't use proper semantic markup to identify footnotes. The proper fix is to enable footnotes in the default Discount plugin, which will require another, separate patch.

Finally, I have done some improvements (I hope!) on the layout of this theme. I made the top header much lighter and transparent to work around an issue where followed anchors would be hidden under the top header. I have also removed the top menu made out of the sidebar plugin because it was cluttering the display too much. Those links are all on the frontpage anyways and I suspect people were not using them so much.

The code is, as before, available in this git repository although you may want to start from the new ikistrap theme that is based on Bootstrap 4 and that may eventually be merged in ikiwiki directly.

DNS diagnostics

Through this interesting overview of various *ping tools, I found out about the dnsdiag tool which currently allows users to do DNS traces, tampering detection and ping over DNS. In the hope of packaging it into Debian, I have requested clarifications regarding a modification to the DNSpython library the tool uses.
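For the curious, a quick usage sketch of those tools, assuming the scripts are run from a dnsdiag checkout (flags as documented in the project's README):

$ ./dnsping.py -c 3 -s 8.8.8.8 example.com
$ ./dnstraceroute.py example.com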

But I went even further and boldly opened a discussion about replacing DNSstuff, the venerable DNS diagnostic tool that is now commercial. It is somewhat surprising that no publicly released software does those sanity checks for DNS, given how old DNS is.

Incidentally, I have also requested smtpping to be packaged in Debian as well but httping is already packaged.

Link checking

In the process of writing this article, I suddenly remembered that I constantly make mistakes in the various links I post on my site. So I started looking at a link checker, another tool that should be well established but that, surprisingly, is not quite there yet.

I have found this neat software written in Python called LinkChecker. Unfortunately, it is basically broken in Debian, so I had to do a non-maintainer upload to fix that old bug. I managed to force myself to not take over maintainership of this orphaned package but I may end up doing just that if no one steps up the next time I find issues in the package.

One of the problems I had checking links in my blog is that I constantly refer to sites that are hostile to bots, like the Debian bugtracker and MoinMoin wikis. So I published a patch that adds a --no-robots flag to be able to crawl those sites effectively.
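With that patch applied, checking a bot-hostile site boils down to this (using my own blog as the example):

$ linkchecker --no-robots https://anarc.at/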

I know there is the W3C tool but it's written in Perl, and there's probably zero chance for me to convince those guys to bypass robots exclusion rules, so I am sticking to LinkChecker.

Other Debian packaging work

At my request, Drush has finally been removed from Debian. Hopefully someone else will pick up that work, but since it basically needs to be redone from scratch, there was no sense in keeping it in the next release of Debian. Similarly, Semanticscuttle was removed from Debian as well.

I have uploaded new versions of tuptime, sopel and smokeping. I have also filed a Request For Help for Smokeping. I am happy to report there was a quick response and people will be stepping up to help with the maintenance of that venerable monitoring software.

Background radiation

Finally, here's the generic background noise of me running around like a chicken with my head cut off.

I should also mention that I will be less active in the coming months, as I will be heading outside now that summer has finally arrived! I feel somewhat uncomfortable documenting my summer publicly here, as I am more protective of my privacy than I was before on this blog. But we'll see how it goes, maybe you'll see non-technical articles here again soon!

19 May, 2016 10:49PM

hackergotchi for Steve Kemp

Steve Kemp

Accidental data-store .. is go!

A couple of days ago I wrote:

The code is perl-based, because Perl is good, and available here on github:

..

TODO: Rewrite the thing in #golang to be cool.

I might not be cool, but I did indeed rewrite it in golang. It was quite simple, and a simple benchmark of uploading two million files, balanced across 4 nodes, worked perfectly.

https://github.com/skx/sos/

19 May, 2016 06:38PM

Valerie Young

Summer of Reproducible Builds

 

Hello friend, family, fellow Outreachy participants, and the Debian community!

This blog's primary purpose will be to track the progress of the Outreachy project in which I'm participating this summer 🙂  This post is to introduce myself and my project (working on the Debian reproducible builds project).

What is Outreachy? You might not know! Let me empower you: Outreachy is an organization connecting women and minorities to mentors in the free (as in freedom) software community, /and/ funding for three months to work with the mentors and contribute to a free software project.  If you are a woman or minority human that likes free software, or if you know anyone in this situation, please tell them about Outreachy 🙂 Or put them in touch with me, I'd happily tell them more.

So who am I?

My name is Valerie Young. I live in the Boston Metropolitan Area (any other outreachy participants here?) and hella love free software. 

Some bullet-pointed Val facts in rough reverse chronological order:
- I run Debian but only began contributing during the Outreachy application process
- If you went to DebConf2015, you might have seen me dye nine people's hair blue, blond or Debian swirl.
- If you stop through Boston I could be easily convinced to dye your hair.
- I worked on an electronic medical records web application for the last two years (lotsa Javascriptin' and Perlin' at athenahealth)
- Before that I taught a programming summer program at the University of Moratuwa in Sri Lanka.
- Before that I got degrees in physics and computer science at Boston University.
- At BU I helped start a hackerspace where my interest in technology, free software, hacker culture, anarchy, and the internet all began.
- I grew up in the very fine San Francisco Bay Area.

What will I be working on?

Reproducible builds!

In the near future I'll write a “What is reproducible builds? Why is it so hot right now?” post.  For now, from a high (and not technical) level, reproducible builds is a broad effort to verify that the computer executable binary programs you run on your computer come from the human readable source code they claim to. It is not presently /impossible/ to do this verification, but it's not easy, and there are a lot of nuanced computer quirks that make it difficult for the most experienced programmer and straight-up impossible for a user with no technical expertise. And without this ability to verify -- the state we are in now -- any executable piece of software could be hiding secret code. 

The first step towards the goal of verifiability is to make reproducibility an essential part of software development. Reproducible builds means this: when you compile a program from the source code, it should always be identical, bit by bit. If the program is always identical, you can compare your version of the software to any trusted programmer's with very little effort. If it is identical, you can trust it -- if it's not, you have reason to worry.
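In shell terms, the property really is that simple -- two independent builds of the same source should compare equal, bit for bit (the package name here is hypothetical):

$ sha256sum build1/foo_1.0_amd64.deb build2/foo_1.0_amd64.deb
$ cmp build1/foo_1.0_amd64.deb build2/foo_1.0_amd64.deb && echo "bit by bit identical"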

The Debian project is undergoing an effort to make the entire Debian operating system verifiably reproducible (hurray!). My Outreachy-funded summer contribution involves improving and updating tests.reproducible-builds.org – a site that presently surfaces the results of reproducibility testing of several free software projects (including Debian, Fedora, coreboot, OpenWrt, NetBSD, FreeBSD and ArchLinux). However, the design of tests.r-b.org is a bit confusing, making it difficult for a user to find how to check on the reproducibility of a given package for one of the aforementioned projects, or to understand the reasons for failure. Additionally, the backend test results of Debian are outgrowing the original SQLite database, and many projects do not log the results of package testing at all. I hope, by the end of the summer, we'll have a more beefed-up and pretty site as well as better-organized backend data 🙂

This summer there will be 3 other Outreachy participants working on the Debian reproducible builds project! Check out their blogs/projects:
Scarlett
Satyam
Ceridwen

Thanks to our Debian mentors -- Lunar, Holger Levsen, and Mattia Rizzolo -- for taking us on 🙂 

 

19 May, 2016 06:02PM by spectranaut

hackergotchi for Michal Čihař

Michal Čihař

wlc 0.3

wlc 0.3, a command line utility for Weblate, has just been released. This is probably the first release which is worth using, so it's probably also worth a bigger announcement.

It is built on the API introduced in Weblate 2.6, which is still in development. Several commands from wlc will not work properly if executed against Weblate 2.6; the first fully supported version will be 2.7 (current git is okay as well, it is now running on both the demo and hosting servers).

How to use it? First you will probably want to store the credentials, so that your requests are authenticated (you can do unauthenticated requests as well, but obviously only read-only ones and on public objects), so let's create ~/.config/weblate:

[weblate]
url = https://hosted.weblate.org/api/

[keys]
https://hosted.weblate.org/api/ = APIKEY

Now you can do basic commands:

$ wlc show weblate/master/cs
...
last_author: Michal Čihař
last_change: 2016-05-13T15:59:25
revision: 62f038bb0bfe360494fb8dee30fd9d34133a8663
share_url: https://hosted.weblate.org/engage/weblate/cs/
total: 1361
total_words: 6144
translate_url: https://hosted.weblate.org/translate/weblate/master/cs/
translated: 1361
translated_percent: 100.0
translated_words: 6144
url: https://hosted.weblate.org/api/translations/weblate/master/cs/
web_url: https://hosted.weblate.org/projects/weblate/master/cs/

You can find more examples in wlc documentation.

Filed under: Debian English phpMyAdmin SUSE Weblate

19 May, 2016 04:00PM

Petter Reinholdtsen

I want the courts to be involved before the police can hijack a news site DNS domain (#domstolkontroll)

I just donated to the NUUG defence "fond" to fund the effort in Norway to get the seizure of the news site popcorn-time.no tested in court. I hope everyone that agree with me will do the same.

Would you be worried if you knew the police in your country could hijack DNS domains of news sites covering a free software system without talking to a judge first? I am. What if the free software system combined search engine lookups, bittorrent downloads and video playout and was called Popcorn Time? Would that affect your view? It still makes me worried.

In March 2016, the Norwegian police seized the DNS domain popcorn-time.no (as in, forced NORID to change the IP address it pointed to, to one controlled by the police) without any supervision from the courts. I did not know about the web site back then, and assumed the courts had been involved, and was very surprised when I discovered that the police had hijacked the DNS domain without asking a judge for permission first. I was even more surprised when I had a look at the web site content on the Internet Archive, and only found news coverage about Popcorn Time, not any material published without the right holders' permission.

The seizure was widely covered in the Norwegian press (see for example Hegnar Online and ITavisen and NRK), at first due to the press release sent out by Økokrim, but then based on protests from the law professor Olav Torvund and lawyer Jon Wessel-Aas. It even got some coverage on TorrentFreak.

I wrote about the case a month ago, when the Norwegian Unix User Group (NUUG), where I am an active member, decided to ask the courts to test this seizure. The request was denied, but NUUG and its co-requestor EFN have not given up, and now they are rallying for support to get the seizure legally challenged. They accept both bank and Bitcoin transfer for those that want to support the request.

If you, like me, believe news sites about free software should not be censored, even if the free software has both legal and illegal applications, and that DNS hijacking should be tested by the courts, I suggest you show your support by donating to NUUG.

19 May, 2016 12:00PM

May 18, 2016

Stig Sandbeck Mathisen

Puppet 4 uploaded to Debian experimental

I’ve uploaded puppet 4.4.2-1 to Debian experimental.

Please test with caution, and expect sharp corners. This is a new major version of Puppet in Debian, with many new features and potentially breaking changes, as well as a big rewrite of the .deb packaging. Bug reports for src:puppet are very welcome.

As previously described in #798636, the new package names are:

  • puppet (all the software)

  • puppet-agent (package containing just the init script and systemd unit for the puppet agent)

  • puppet-master (init script and systemd unit for starting a single master)

  • puppet-master-passenger (This package depends on apache2 and libapache2-mod-passenger, and configures a puppet master scaled for more than a handful of puppet agents)

Lots of hugs to the authors, keepers and maintainers of autopkgtest, debci, piuparts and ruby-serverspec for their software. They helped me figure out when I had reached “good enough for experimental”.

Some notes:

  • To use exported resources with puppet 4, you need a puppetdb installation and a relevant puppetdb-terminus package on your puppet master. This is not available in Debian, but is available from Puppet’s repositories.

  • Syntax highlighting for Emacs and Vim is no longer built from the puppet package. Standalone packages will be made.

  • The packaged puppet modules need an overhaul of their dependencies to install alongside this version of puppet. Testing would probably also be great to see if they actually work.

I sincerely hope someone finds this useful. :)

18 May, 2016 10:00PM

hackergotchi for Jonathan McDowell

Jonathan McDowell

First steps with the ATtiny45

1 port USB Relay

These days the phrase “embedded” usually means no console (except, if you’re lucky, console on a UART for debugging) and probably busybox for as much of userspace as you can get away with. You possibly have package management from OpenEmbedded or similar, though it might just be a horribly kludged-together rootfs if someone hates you. Either way it’s rare for it not to involve some sort of hardware and OS much more advanced than the 8 bit machines I started out programming on.

That is, unless you’re playing with Arduinos or other similar hardware. I’m currently waiting on some ESP8266 dev boards to arrive, but even they’re quite advanced, with wifi and a basic OS framework provided. A long time ago I meant to get around to playing with PICs but never managed to do so. What I realised recently was that I have a ready-made USB relay board that is powered by an ATtiny45. The first step was to figure out if there were suitable programming pins available, and it turned out they were all conveniently brought out to the edge of the board. Next I got out my trusty Bus Pirate, installed avrdude and lo and behold:

$ avrdude -p attiny45 -c buspirate -P /dev/ttyUSB0
Attempting to initiate BusPirate binary mode...
avrdude: Paged flash write enabled.
avrdude: AVR device initialized and ready to accept instructions

Reading | ################################################## | 100% 0.01s

avrdude: Device signature = 0x1e9206 (probably t45)

avrdude: safemode: Fuses OK (E:FF, H:DD, L:E1)

avrdude done.  Thank you.

Perfect. I then read the existing flash image off the device, disassembled it, worked out it was based on V-USB and then proceeded to work out that the only interesting extra bit was that the relay was hanging off pin 3 on IO port B. Which led to me knocking up what I thought should be a functionally equivalent version of the firmware, available locally or on GitHub. It’s worked with my basic testing so far and has confirmed to me I understand how the board is set up, meaning I can start to think about what else I could do with it…
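For reference, the flash read step with the same programmer setup looks something like this (the output file name is my own invention, not from the post):

$ avrdude -p attiny45 -c buspirate -P /dev/ttyUSB0 -U flash:r:original.hex:i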

18 May, 2016 09:25PM

Andy Simpkins

OpenTAC sprint, Cambridge

Last weekend saw a small group get together in Cambridge to hack on the OpenTAC.  OpenTAC is an OpenHardware OpenSoftware test platform, designed specifically to aid automated testing and continuous integration.

Aimed at small / mobile / embedded targets, OpenTAC v1 provides all of the support infrastructure to connect up to 8 DUTs (Devices Under Test) to your test or CI system.
Each of the 8 EUT ports provides:

  • A serial port (either RS232 levels on a DB9 socket, or 3V3 TTL on a molex kk plug)
  • USB Power (up to 2A with a software-defined fuse and alarm limits)
  • USB data interconnect
  • Ethernet

All ports on the EUT interface are relay isolated; this means that cables to your EUT can be ‘unplugged’ under software control (we are aware of several SoC development boards that latch up if there is a serial port connected before power is applied).

Additionally, there are 8 GPIO lines that can be used as switch controls to any EUT (perhaps to put a specific EUT into a programming mode, reboot it or even start it).

 

Anyway, back to the hacking weekend...

 

Joining Steve McIntyre and myself were Mark Brown and Michael Grzeschik (sorry Michael, I couldn’t find a homepage).  Mark traveled down from Scotland whilst Michael flew in from Germany for the weekend.  Gents, we greatly appreciate you taking the time and expense to join us this weekend.  I should also thank my employer Toby Churchill Ltd. for allowing us to use the office to host the event.

A lot of work got done, and I believe we have now fully tested and debugged the hardware.  We have also made great progress with the device tree and device drivers for the platform.  Mark got the EUT power system working as proof of concept, and has taken an OpenTAC board back with him to turn this into suitable drivers and hopefully push them upstream.  Meanwhile Michael spent his time working on the system portion of the device tree: OpenTAC’s internal power sequencing, thermal management subsystem, and USB hub control.  Steve got to grips with the USB serial converters (including how to read and program their internal non-volatile settings).  Finally, I was able to explain hardware sequencing to everyone, and to modify boards to overcome some of my design mistakes (the biggest was by far the missing sense resistors for the EUT power management).

 

 

18 May, 2016 09:00PM by andy

hackergotchi for Steve Kemp

Steve Kemp

Accidental data-store ..

A few months back I was looking over a lot of different object-storage systems, giving them mini-reviews, and trying them out in turn.

While many were overly complex, some were simple. Simplicity is always appealing, providing it works.

My review of camlistore was generally positive, because I like the design. Unfortunately it also highlighted a lack of documentation about how to use it to scale, replicate, and rebalance.

How hard could it be to write something similar, but also paying attention to keep it as simple as possible? Well perhaps it was too easy.

Blob-Storage

First of all we write a blob-storage system. We allow three operations to be carried out, sketched with hypothetical endpoints after this list:

  • Retrieve a chunk of data, given an ID.
  • Store the given chunk of data, with the specified ID.
  • Return a list of all known IDs.
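As a concrete illustration, those three operations map naturally onto a tiny HTTP API; the host, port and endpoints below are made up for the sketch, not the project's actual API:

$ curl http://blob1:4000/blob/deadbeef                                # retrieve a chunk by ID
$ curl -X POST --data-binary @chunk http://blob1:4000/blob/deadbeef   # store a chunk under an ID
$ curl http://blob1:4000/blobs                                        # list all known IDs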

 

API Server

We write a second server that consumers actually use, though it is implemented in terms of the blob-storage server listed previously.

The public API is trivial:

  • Upload a new file, returning the ID which it was stored under.
  • Retrieve a previous upload, by ID.

 

Replication Support

The previous two services are sufficient to write an object storage system, but they don't necessarily provide replication. You could add immediate replication; an upload of a file could involve writing that data to N blob-servers, but in a perfect world servers don't crash, so why not replicate in the background? You save time if you only save uploaded-content to one blob-server.

Replication can be implemented purely in terms of the blob-servers (see the sketch after this list):

  • For each blob server, get the list of objects stored on it.
  • Look for that object on each of the other servers. If it is found on N of them we're good.
  • If there are fewer copies than we like, then download the data, and upload to another server.
  • Repeat until each object is stored on a sufficient number of blob-servers.
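Spelled out against the hypothetical endpoints sketched earlier, the whole sweep is little more than a nested loop (the server list, port and listing format are all assumptions):

#!/bin/sh
# naive replication sweep: ensure every object exists on at least MIN_COPIES servers
MIN_COPIES=2
SERVERS="blob1:4000 blob2:4000 blob3:4000 blob4:4000"

for src in $SERVERS; do
    for id in $(curl -s "http://$src/blobs"); do
        # count how many servers already hold this object
        copies=0
        for s in $SERVERS; do
            curl -sf -o /dev/null "http://$s/blob/$id" && copies=$((copies + 1))
        done
        [ "$copies" -ge "$MIN_COPIES" ] && continue
        # upload to the first server that does not have it yet
        for dst in $SERVERS; do
            curl -sf -o /dev/null "http://$dst/blob/$id" && continue
            curl -s "http://$src/blob/$id" |
                curl -s -X POST --data-binary @- "http://$dst/blob/$id"
            break
        done
    done
done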

 

My code is reliable, the implementation is almost painfully simple, and the only difference in my design is that rather than having an API-server which allows both "uploads" and "downloads" I split it into two - that means you can leave your "download" server open to the world, so that it can be useful, and your upload-server can be firewalled to only allow a few hosts to access it.

The code is perl-based, because Perl is good, and available here on github: https://github.com/skx/sos/

TODO: Rewrite the thing in #golang to be cool.

18 May, 2016 06:49PM

Bits from Debian

Imagination accelerates Debian development for 64-bit MIPS CPUs

Imagination Technologies recently donated several high-performance SDNA-7130 appliances to the Debian Project for the development and maintenance of the MIPS ports.

The SDNA-7130 (Software Defined Network Appliance) platforms are developed by Rhino Labs, a leading provider of high-performance data security, networking, and data infrastructure solutions.

With these new devices, the Debian project will have access to a wide range of 32- and 64-bit MIPS-based platforms.

Debian MIPS ports are also possible thanks to donations from the aql hosting service provider, the Eaton remote controlled ePDU, and many other individual members of the Debian community.

The Debian project would like to thank Imagination, Rhino Labs and aql for this coordinated donation.

More details about GNU/Linux for MIPS CPUs can be found in the related press release at Imagination and their community site about MIPS.

18 May, 2016 07:30AM by Laura Arjona Reina

May 17, 2016

Reproducible builds folks

Reproducible builds: week 55 in Stretch cycle

What happened in the Reproducible Builds effort between May 8th and May 14th 2016:

Documentation updates

Toolchain fixes

  • dpkg 1.18.7 has been uploaded to unstable, after which Mattia Rizzolo took care of rebasing our patched version.
  • gcc-5 and gcc-6 migrated to testing with the patch to honour SOURCE_DATE_EPOCH
  • Ximin Luo started an upstream discussion with the Ghostscript developers.
  • Norbert Preining has uploaded a new version of texlive-bin with these changes relevant to us:
    • imported upstream version 2016.20160512.41045
    • support for suppressing timestamps (SOURCE_DATE_EPOCH) (Closes: #792202)
    • add support for SOURCE_DATE_EPOCH also to luatex
  • cdbs 0.4.131 has been uploaded to unstable by Jonas Smedegaard, fixing these issues relevant to us:
    • #794241: export SOURCE_DATE_EPOCH. Original patch by akira
    • #764478: call dh_strip_nondeterminism if available. Original patch by Holger Levsen
  • libxslt 1.1.28-3 has been uploaded to unstable by Mattia Rizzolo, fixing the following toolchain issues:
    • #823857: backport patch from upstream to provide stable IDs in the generated documents.
    • #791815: Honour SOURCE_DATE_EPOCH when embedding timestamps in docs. Patch by Eduard Sanou.

Packages fixed

The following 28 packages have become newly reproducible due to changes in their build dependencies: actor-framework ask asterisk-prompt-fr-armelle asterisk-prompt-fr-proformatique coccinelle cwebx d-itg device-tree-compiler flann fortunes-es idlastro jabref konclude latexdiff libint minlog modplugtools mummer mwrap mxallowd mysql-mmm ocaml-atd ocamlviz postbooks pycorrfit pyscanfcs python-pcs weka

The following 9 packages had older versions which were reproducible, and their latest versions are now reproducible again due to changes in their build dependencies: csync2 dune-common dune-localfunctions libcommons-jxpath-java libcommons-logging-java libstax-java libyanfs-java python-daemon yacas

The following packages have become newly reproducible after being fixed:

The following packages had older versions which were reproducible, and their latest versions are now reproducible again after being fixed:

  • klibc/2.0.4-9 by Ben Hutchings.

Some uploads have fixed some reproducibility issues, but not all of them:

Patches submitted that have not made their way to the archive yet:

  • #787424 against emacs24 by Alexis Bienvenüe: order hashes when generating .el files
  • #823764 against sen by Daniel Shahaf: render the build timestamp in a consistent timezone
  • #823797 against openclonk by Alexis Bienvenüe: honour SOURCE_DATE_EPOCH
  • #823961 against herbstluftwm by Fabian Wolff: honour SOURCE_DATE_EPOCH
  • #824049 against emacs24 by Alexis Bienvenüe: make start value of gensym-counter reproducible
  • #824050 against emacs24 by Alexis Bienvenüe: make autoloads files reproducible
  • #824182 against codeblocks by Fabian Wolff: honour SOURCE_DATE_EPOCH
  • #824263 against cmake by Reiner Herrmann: sort file lists from file(GLOB ...)

Package reviews

344 reviews have been added, 125 have been updated and 20 have been removed this week.

14 FTBFS bugs have been reported by Chris Lamb.

tests.reproducible-builds.org

Misc.

Dan Kegel sent a mail to report about his experiments with a reproducible dpkg PPA for Ubuntu. According to him sudo add-apt-repository ppa:dank/dpkg && sudo apt-get update && sudo apt-get install dpkg should be enough to get reproducible builds on Ubuntu 16.04.

This week's edition was written by Ximin Luo and Holger Levsen and reviewed by a bunch of Reproducible builds folks on IRC.

17 May, 2016 11:09PM

hackergotchi for Mehdi Dogguy

Mehdi Dogguy

Newmaint — Call for help

The process leading to acceptance of new Debian Maintainers is mainly administrative today and is handled by the Newmaint team. In order to simplify this process further, the team wants to integrate their workflow into nm.debian.org's interface so that prospective maintainers can send their application online and the Newmaint team can review it from within the website.

We need your help to implement the missing pieces into nm.debian.org. It is written in Python and using Django. If you have some experience with that, you should definitely join the newmaint-site mailing list and ask for the details. Enrico or someone else on the list will do their best to share their vision and explain the needed work in order to get this properly implemented!

You don't need to be a Debian Developer to contribute to this project. Anyone can step up and help!

17 May, 2016 09:49PM by Mehdi (noreply@blogger.com)

hackergotchi for Sean Whitton

Sean Whitton

seoulviasfo

I spent last night in San Francisco on my way from Tucson to Seoul. This morning as I headed to the airport, I caught the end of a shouted conversation between a down-and-out and a couple of middle school-aged girls, who ran away back to the Asian Art museum as the conversation ended. A security guard told the man that he needed him to go away. The wealth divide so visible here just isn’t something you really see around Tucson.

I’m working on a new module for Propellor that’s complicated enough that I need to think carefully about the Haskell in order to produce a flexible and maintainable module. I’ve only been doing an hour or so of work on it per day, but the past few days I wake up each day with an idea for restructuring yesterday’s code. These ideas aren’t anything new to me: I think I’m just dredging up the understanding of Haskell I developed last year when I was studying it more actively. Hopefully this summer I can learn some new things about Haskell.

Riding on the “Bay Area Rapid Transit” (BART) feels like stepping back in time to the years of Microsoft’s ascendency, before we had a tech world dominated by Google and Facebook: the platform announcements are in a computerised voice that sounds like it was developed in the nineties. They’ll eventually replace the old trains—apparently some new ones are coming in 2017—so I feel privileged to have been able to ride the older ones. I feel the same about the Tube in London.

I really appreciate old but supremely reliable and effective public transport. It reminds me of the Debian toolchain: a bit creaky, but maintained over a sufficiently long period that it serves everyone a lot better than newer offerings, which tend to be produced with ulterior corporate motives.

17 May, 2016 07:54PM

Mark Brown

OpenTAC sprint

This weekend Toby Churchill kindly hosted a hacking weekend for OpenTAC – myself, Michael Grzeschik, Steve McIntyre and Andy Simpkins got together to bring up the remaining bits of the hardware on the current board revision and get some of the low level tooling, like production flashing for the FTDI serial ports on the board, up and running. It was a very productive weekend, we verified that everything was working with only a few small mods needed for the board. Personally the main thing I worked on was getting most of an initial driver for the EMC1701 written. That was the one component without Linux support and allowed us to verify that the power switching and measurement for the systems under test was working well.

There’s still at least one more board revision and quite a bit of software work to do (I’m hoping to get the EMC1701 upstream for v4.8), but it was great to finally see all the physical components of the system working well and see it managing a system under test; this board revision should support all the software development that’s going to be needed for the final board.

Thanks to all who attended, Pengutronix for sponsoring Michael’s attendance and Toby Churchill for hosting!


17 May, 2016 03:11PM by broonie

Mike Gabriel

NXv3 Rebase: Build nxagent against X.org 7.0

As already hinted in my previous blog post, here comes a short howto that explains how to test-build nxagent (v3) against a modularized X.org 7.0 source tree.

WARNING: Please note that mixing NX code and X.org code partially turns the original X.org code base into GPL-2 code. We are aware of this situation and work on moving all NXv3 related GPL-2 code into the nxagent DDX code (xserver-xorg/hw/nxagent) or--if possible--dropping it completely. The result shall be a range of patches against X.org (licensable under the same license as the respective X.org files) and a GPL-2 licensed DDX (i.e. nxagent).

How to build this project

For the Brave and Playful

$ git clone https://git.arctica-project.org/nx-X11-rebase/build.git .
$ bash populate.sh sources.lst
$ ./buildit.sh

You can find the built tree in the _install/ sub-directory.

Please note that cloning Git repositories over the https protocol can be quite slow. If you want to speed things up, consider signing up with our GitLab server.

For Developers...

... who have registered with our GitLab server.

$ git clone git@git.arctica-project.org:nx-X11-rebase/build.git .
$ bash populate.sh sources-devs.lst
$ ./buildit.sh

You will find the built tree in the _install/ sub-directory.

The related git repositories are in the repos/ sub-directory. All repos modified for NX have been cloned from the Arctica Project's GitLab server via SSH. Thus, you as a developer can commit changes on those repos and push back your changes to the GitLab server.

Required tools for building

Debian/Ubuntu and alike

  • build-essential
  • automake
  • gawk
  • git
  • pkg-config
  • libtool
  • libz-dev
  • libjpeg-dev
  • libpng-dev

In a one-liner command:

$ sudo apt-get install build-essential automake gawk git pkg-config libtool libz-dev libjpeg-dev libpng-dev

Fedora

If someone tries this out in a clean Fedora chroot environment, please let us know which packages are required as build dependencies.

openSUSE

If someone tries this out in a clean openSUSE chroot environment, please let us know which packages are required as build dependencies.

Testing the built nxagent and nxproxy

The tests/ subdir contains some scripts which can be used to test the compile results.

  • run-nxagent runs an nxagent and starts an nxproxy connection to it (do this as a normal non-root user):
    $ tests/run-nxagent
    $ export DISPLAY=:9
    # launch e.g. MATE desktop environment on Debian, adapt session type and Xsession startup to your system / distribution
    $ STARTUP=mate-session /etc/X11/Xsession
    
  • run-nxproxy2nxproxy-test connects to nxproxys using the nx compression protocol:
    $ tests/run-nxproxy2nxproxy-test
    $ export DISPLAY=:8
    # launch e.g. xterm and launch other apps from within that xterm process
    $ xterm &
    
  • more to come...

Notes on required X.org changes (NX_MODIFICATIONS)

For this build workflow to work, we (i.e. mostly Ulrich Sibiller) had to work several NoMachine patches into the original X.org 7.0 code. Here is a list of modified X11 components with URLs pointing to the branch containing those changes:

xkbdata                            xorg/data/xkbdata                       rebasenx  1.0.1     https://git.arctica-project.org/nx-X11-rebase/xkbdata.git
libfontenc                         xorg/lib/libfontenc                     rebasenx  1.0.1     https://git.arctica-project.org/nx-X11-rebase/libfontenc.git
libSM                              xorg/lib/libSM                          rebasenx  1.0.0     https://git.arctica-project.org/nx-X11-rebase/libSM.git
libX11                             xorg/lib/libX11                         rebasenx  1.0.0     https://git.arctica-project.org/nx-X11-rebase/libX11.git
libXau                             xorg/lib/libXau                         rebasenx  1.0.0     https://git.arctica-project.org/nx-X11-rebase/libXau.git
libXfont                           xorg/lib/libXfont                       rebasenx  1.3.1     https://git.arctica-project.org/nx-X11-rebase/libXfont.git
libXrender                         xorg/lib/libXrender                     rebasenx  0.9.0.2   https://git.arctica-project.org/nx-X11-rebase/libXrender.git
xtrans                             xorg/lib/libxtrans                      rebasenx  1.0.0     https://git.arctica-project.org/nx-X11-rebase/libxtrans.git
kbproto                            xorg/proto/kbproto                      rebasenx  1.0.2     https://git.arctica-project.org/nx-X11-rebase/kbproto.git
xproto                             xorg/proto/xproto                       rebasenx  7.0.4     https://git.arctica-project.org/nx-X11-rebase/xproto.git
xorg-server                        xorg/xserver                            rebasenx  1.0.1     https://git.arctica-project.org/nx-X11-rebase/xserver.git
mesa                               mesa/mesa                               rebasenx  6.4.1     https://git.arctica-project.org/nx-X11-rebase/mesa.git

Credits

Nearly all of this has been achieved by Ulrich Sibiller. Thanks a lot for giving your time and energy to that. As the rebasing of NXv3 is currently a funded project supported by the Qindel Group, we are currently negotiating ways of monetarily appreciating Ulrich's intensive work on this. Thanks a lot, once more!!!

Feedback

If anyone of you feels like trying out the test build as described above, please consider signing up with the Arctica Project's GitLab server and reporting your issues there directly (against the repository nx-X11-rebase/build). Alternatively, feel free to contact us on IRC (Freenode): #arctica or subscribe to our developers' mailing list. Thank you.

light+love
Mike Gabriel

17 May, 2016 02:27PM by sunweaver

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, April 2016

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In April, 116.75 work hours have been dispatched among 9 paid contributors. Their reports are available:

  • Antoine Beaupré did 16h.
  • Ben Hutchings did 12.25 hours (out of 15 hours allocated + 5.50 extra hours remaining, he returned the remaining 8.25h to the pool).
  • Brian May did 10 hours.
  • Chris Lamb did nothing (instead of the 16 hours he was allocated, his hours have been redispatched to other contributors over May).
  • Guido Günther did 2 hours (out of 8 hours allocated + 3.25 remaining hours, leaving 9.25 extra hours for May).
  • Markus Koschany did 16 hours.
  • Santiago Ruano Rincón did 7.50 hours (out of 12h allocated + 3.50 remaining, thus keeping 8 extra hours for May).
  • Scott Kitterman posted a report for 6 hours made in March but did nothing in April. His 18 remaining hours have been returned to the pool. He decided to stop doing LTS work for now.
  • Thorsten Alteholz did 15.75 hours.

Many contributors did not use all their allocated hours. This is partly explained by the fact that in April Wheezy was still under the responsibility of the security team and they were not able to drive updates from start to finish.

In any case, this means that they have more hours available over May and since the LTS period started, they should hopefully be able to make a good dent in the backlog of security updates.

Evolution of the situation

The number of sponsored hours reached a new record with 132 hours per month, thanks to two new gold sponsors (Babiel GmbH and Plat’Home). Plat’Home’s sponsorship was aimed at helping us maintain Debian 7 Wheezy on armel and armhf (on top of the already supported amd64 and i386). Hopefully the trend will continue so that we can reach our objective of funding the equivalent of a full-time position.

The security tracker currently lists 45 packages with a known CVE and the dla-needed.txt file lists 44 packages awaiting an update.

This is a bit more than the 15-20 open entries that we used to have at the end of the Debian 6 LTS period.

Thanks to our sponsors

New sponsors are in bold.


17 May, 2016 01:57PM by Raphaël Hertzog

May 16, 2016

Bits from Debian

New Debian Developers and Maintainers (March and April 2016)

The following contributors got their Debian Developer accounts in the last two months:

  • Sven Bartscher (kritzefitz)
  • Harlan Lieberman-Berg (hlieberman)

Congratulations!

16 May, 2016 10:10PM by Ana Guerrero Lopez

hackergotchi for Clint Adams

Clint Adams

Canadian Automobile Association

bind9 in jessie does not support CAA records

16 May, 2016 08:18PM

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

stretch on ODROID XU4

I recently acquired an ODROID XU4. Despite being 32-bit, it's currently at the upper end of cheap SoC-based devboards; it's based on Exynos 5422 (which sits in Samsung Galaxy S5), which means 2 GHz quadcore Cortex-A15 (plus four slower Cortex-A7, in a big.LITTLE configuration), 2 GB RAM, USB 3.0, gigabit Ethernet, a Mali-T628 GPU and eMMC/SD storage. (My one gripe about the hardware is that you can't put on the case lid while still getting access to the serial console.)

Now, since I didn't want it for HTPC or something similar (I wanted a server/router I could carry with me), I didn't care much about the included Ubuntu derivative with all sorts of Samsung modifications, so instead, I went on to see if I could run Debian on it. (Spoiler alert: You can't exactly just download debian-installer and run it.) It turns out there are lots of people who make Debian images, but they're still filled with custom stuff here and there.

In recent times, people have put in heroic efforts to make unified ARM kernels; servers et al can now enumerate hardware using ACPI, while SoCs (such as the XU4) have a “device tree” file (loaded by the bootloader) containing a functional description of what hardware exists and how it's hooked up. And lo and behold, the 4.5.0 “armmp” kernel from stretch boots and mostly works! Well… except that there's no HDMI output. :-)

There are two goals I'd like to achieve by this exercise: First, it's usually much easier to upgrade things if they are close to mainline. (I wanted support for sch_fq, for instance, which isn't in 3.10, and the vendor kernel is 3.10.) Second, anything that doesn't work in Debian is suddenly exposed pretty harshly, and can have bugs filed and fixed—which benefits not only XU4 users (if nothing else, because the custom distros have to carry less delta), but usually also other boards, as most issues are of a somewhat more generic nature. Yet, the ideal seems to puzzle some of the more seasoned people in the ODROID user groups; I guess sometimes it's nice to come in as a naïve new user. :-)

So far, I've filed bugs or feature requests to the kernel (#823552, #824435), U-Boot (#824356), grub (#823955, #824399), and login (#824391)—and yes, that includes one for the aforementioned lack of HDMI output. Some of them are already fixed; with some luck, maybe the XU4 can be added next to the other Exynos5 board on the compatibility list for the armmp kernels at some point. :-)

You can get the image at http://storage.sesse.net/debian-xu4/. Be sure to read the README and the linked ODROID forum post.

16 May, 2016 03:58PM

Russ Allbery

Review: Gentleman Jole and the Red Queen

Review: Gentleman Jole and the Red Queen, by Lois McMaster Bujold

Series: Vorkosigan #15
Publisher: Baen
Copyright: 2015
Printing: February 2016
ISBN: 1-4767-8122-2
Format: Kindle
Pages: 352

This is very late in the Vorkosigan series, but it's also a return to a different protagonist and a change of gears to a very different type of story. Gentleman Jole and the Red Queen has Cordelia as a viewpoint character for, I believe, the first time since Barrayar, very early in the series. But you would still want to read the intermediate Miles books before this one given the nature of the story Bujold is telling here. It's a very character-centric, very quiet story that depends on the history of all the Vorkosigan characters and the connection the reader has built up with them. I think you have to be heavily invested in this series already to get that much out of this book.

The protagonist shift has a mildly irritating effect: I've read the whole series, but I was still a bit adrift at times because of how long it's been since I read the books focused on Cordelia. I only barely remember the events of Shards of Honor and Barrayar, which lay most of the foundations of this story. Bujold does have the characters retell them a bit, enough to get vaguely oriented, but I'm pretty sure I missed some subtle details that I wouldn't have if the entire series were fresh in memory. (Oh for the free time to re-read all of the series I'd like to re-read.)

Unlike recent entries in this series, Gentleman Jole and the Red Queen is not about politics, investigations, space (or ground) combat, war, or any of the other sources of drama that have shown up over the course of the series. It's not even about a wedding. The details (and sadly even the sub-genre) are all spoilers, both for this book and for the end of Cryoburn, so I can't go into many details. But I'm quite curious how the die-hard Baen fans would react to this book. It's a bit far afield from their interests.

Gentleman Jole is all about characters: about deciding what one wants to do with one's life, about families and how to navigate them, about boundaries and choices. Choices about what to communicate and what not to communicate, and, partly, about how to maintain sufficient boundaries against Miles to keep his manic energy from bulldozing into things that legitimately aren't any of his business. Since most of the rest of the series is about Miles poking into things that appear to not be his business and finding ways to fix things, it's an interesting shift. It also cast Cordelia in a new light for me: a combination of stability, self-assurance, and careful and thoughtful navigation around others' feelings. Not a lot happens in the traditional plot sense, so one's enjoyment of this book lives or dies on one's investment in the mundane life of the viewpoint characters. It worked for me.

There is also a substantial retcon or reveal about an aspect of Miles's family that hasn't previously been mentioned. (Which term you use depends on whether you think Bujold has had this in mind all along. My money is on reveal.) I suspect some will find this revelation jarring and difficult to believe, but it worked perfectly for me. It felt like exactly the sort of thing that would go unnoticed by the other characters, particularly Miles: something that falls neatly into his blind spots and assumptions, but reads much differently to Cordelia. In general, one of the joys of this book for me is seeing Miles a bit wrong-footed and maneuvered by someone who simply isn't willing to be pushed by him.

One of the questions the Vorkosigan series has been asking since the start is whether anyone can out-maneuver Miles. Ekaterin only arguably managed it, but Gentleman Jole makes it clear that Miles is no match for his mother on her home turf.

This is a quiet and slow book that doesn't feel much like the rest of the series, but it worked fairly well for me. It's not up in the ranks of my favorite books of this series, partly because the way it played out was largely predictable and I never quite warmed to Jole, but Cordelia is delightful and seeing Miles from an outside perspective is entertaining. An odd entry in the series, but still recommended.

Rating: 7 out of 10

16 May, 2016 03:59AM

May 15, 2016

Bits from Debian

What does it mean that ZFS is included in Debian?

Petter Reinholdtsen recently blogged about ZFS availability in Debian. Many people have worked hard on getting ZFS support available in Debian and we would like to thank everyone involved in getting to this point and explain what ZFS in Debian means.

The landing of ZFS in the Debian archive was blocked for years due to licensing problems. Finally, the inclusion of ZFS was announced slightly more than a year ago, in April 2015, by the DPL at the time, Lucas Nussbaum, who wrote "We received legal advice from Software Freedom Law Center about the inclusion of libdvdcss and ZFS in Debian, which should unblock the situation in both cases and enable us to ship them in Debian soon.". In January this year, the following DPL, Neil McGovern, blogged with a lot more details about the legal situation behind this and summarized it as "TLDR: It’s going in contrib, as a source only dkms module."

ZFS is not exactly available in Debian, since Debian proper is only what's included in the "main" section of the archive. What people really mean here is that the ZFS code is now included in "contrib" and it's available for users via DKMS.
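In practice, installation should boil down to something like this, assuming "contrib" is enabled in sources.list and the package names stay as currently proposed (zfs-dkms and zfsutils-linux):

$ sudo apt-get update
$ sudo apt-get install linux-headers-$(uname -r) zfs-dkms zfsutils-linux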

Many people also mixed this up with Ubuntu now including ZFS. However, Debian and Ubuntu are not doing the same thing: Ubuntu is shipping pre-built kernel modules directly, something that is considered to be a GPL violation. As the Software Freedom Conservancy wrote, ZFS, "while licensed under an acceptable license for Debian's Free Software Guidelines, also has a default use that can cause licensing problems for downstream Debian users".

15 May, 2016 08:55PM by Ana Guerrero Lopez

Sven Hoexter

Failing with F5: ASM default ruleset vs curl

Not sure what to say on days when the default ruleset of a "web application firewall" denies access for curl, and the circumvention is as complicated as:

alias curl-vs-asm="curl -A 'Mozilla'"

It starts to feel like I'm wasting my lifetime when I see something like that. Otherwise I like my job (that's without irony!).

Update: Turns out it's even worse. They specifically block curl. Even

curl -A 'A' https://wherever-asm-is-used.example

works.

15 May, 2016 11:16AM

hackergotchi for Norbert Preining

Norbert Preining

Foreigners in Japan are evil …

…at least that is what Tokyo's Shinjuku ward believes. They have put out a very nice brochure about how to behave as a foreigner in Japan: English (local copy) and Japanese (local copy). Nothing in there is really bad, but the tendency is so clear that it makes me think – what on earth do you believe we are doing in this country?

Now what is so strange about that? If you have never lived in Japan you will probably not understand. But reading through this pamphlet I felt like a criminal from the first page on. If you don’t want to read through it, here is a short summary:

  • The first four pages (1-4) deal with manners, accompanied by penal warnings for misbehavior.
  • Pages 5-16 deal with criminal records, stating the amount of imprisonment and fines for strange delicti.
  • Pages 17-19 deal with residence card, again paired with criminal activity listings and fines.
  • Pages 20-23 deal with reporting obligations, again ….
  • And finally page 24 gives you phone numbers for accidents, fires, injury, and general information.

So if you count up, we have 23 pages of warnings, and 1 (as in *one*) page of practical information. Do I need to add more about how we foreigners are considered in Japan?

Just a few points about details:

  • In the part on manner, not talking on the phone in public transport is mentioned – I have to say, after many years here I am still waiting to see the first foreigner talking on the phone loudly, while Japanese regularly chat away at high volume.
  • Again in the manner section, don’t make noise in your flat – well, I lived 3 years in an apartment where the one below me enjoyed playing loud music in the car till late in the night, as well as moving furniture at 3am.
  • Bicycle riding – ohhhh, bicycle riding – those 80+ people meandering around the street, and the school kids riding 4 next to each other. But hey, we foreigners are required to behave differently. Not that any police officer ever stopped a Japanese school kid for that …
  • I just realized that I was doing illegal things for long time – withdrawing money using someone else’s cash card! Damned, it was my wife’s, but still, too bad 🙁

I accept the good intention of the Shinjuku ward to bring forth a bit of warnings and guidance. But the way it was done – it speaks volumes about how we foreigners are treated – second class.

15 May, 2016 03:07AM by Norbert Preining

hackergotchi for Jonathan Dowland

Jonathan Dowland

Announcement

It has become a bit traditional within Debian to announce these things in a geeky manner, so for now:

# ed -p: /etc/exim4/virtual/dow.land
:a
holly: :fail: reserved for future use
.
:wq
99

More soon!

15 May, 2016 02:11AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rcpp 0.12.5: Yet another one

The fifth update in the 0.12.* series of Rcpp arrived on the CRAN network for GNU R a few hours ago, and was just pushed to Debian. This 0.12.5 release follows the 0.12.0 release from late July, the 0.12.1 release in September, the 0.12.2 release in November, the 0.12.3 release in January, and the 0.12.4 release in March --- making it the ninth release at the steady bi-monthly release frequency. This release is once again more of a maintenance release, addressing a number of small bugs, nuisances or documentation issues without adding any major new features.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 662 packages on CRAN depend on Rcpp for making analytical code go faster and further. That is up by almost fifty packages from the last release in late March!

And as during the last few releases, we have new first-time contributors. Sergio Marques helped to enable compilation on Alpine Linux (with its smaller libc variant). Qin Wenfeng helped adapt for Windows builds under R 3.3.0 and the long-awaited new toolchain. Ben Goodrich fixed a (possibly ancient) Rcpp Modules bug he encountered when working with rstan. Other (recurrent) contributor Dan Dillon cleaned up an issue with Nullable and strings. Rcpp Core team members Kevin and JJ took care of a small build nuisance on Windows, and I added a new helper function, updated the skeleton generator and (finally) formally deprecated loadRcppModules() for which loadModule() has been preferred since around R 2.15 or so. More details and links are below.

Changes in Rcpp version 0.12.5 (2016-05-14)

  • Changes in Rcpp API:

    • The checks for different C library implementations now also check for Musl used by Alpine Linux (Sergio Marques in PR #449).

    • Rcpp::Nullable works better with Rcpp::String (Dan Dillon in PR #453).

  • Changes in Rcpp Attributes:

    • R 3.3.0 Windows with Rtools 3.3 is now supported (Qin Wenfeng in PR #451).

    • Correct handling of dependent file paths on Windows (use winslash = "/").

  • Changes in Rcpp Modules:

    • An apparent race condition in Module loading seen with R 3.3.0 was fixed (Ben Goodrich in #461 fixing #458).

    • The (older) loadRcppModules() is now deprecated in favour of loadModule() introduced around R 2.15.1 and Rcpp 0.9.11 (PR #470).

  • Changes in Rcpp support functions:

    • The Rcpp.package.skeleton() function was again updated in order to create a DESCRIPTION file which passes R CMD check without notes, warnings, or errors under R-release and R-devel (PR #471).

    • A new function compilerCheck can test for minimal g++ versions (PR #474).

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

15 May, 2016 01:54AM

May 14, 2016

Antoine Beaupré

Long delays posting Debian Planet Venus

For the last few months, it seems that my posts haven't been reaching the Planet Debian aggregator correctly. I timed the last two posts and they both arrived roughly 10 days late in the feed.

SNI issues

At first, I suspected I was a victim of the SNI bug in Planet Venus: since it is still running in Python 2.7 and uses httplib2 (as opposed to, say, Requests), it has trouble with sites running under SNI. In January, there were 9 blogs with that problem on Planet. When this was discussed elsewhere in February, there were 18, and then 21 were reported in March. With everyone (like me) enabling Let's Encrypt on their websites, this number is bound to grow.

I was able to reproduce the Debian Planet setup locally to do further tests and ended up sending two (unrelated) patches to the Debian bug tracker against Planet Venus, the software running Debian planet. In my local tests, I found 22 hosts with SNI problems. I also posted some pointers on how the code could be ported over to the more modern Requests and Cachecontrol modules.

Expiry issues

However, some of those feeds were working fine on philp, the host I found running as the Planet Master. Even more strange, my own website was working fine!

INFO:planet.runner:Feed https://anarc.at/tag/debian-planet/index.rss unchanged

Now that was strange: why was my feed fetched, but noted as unchanged? Digging around, I found a FAQ question buried down in the PlanetDebian wiki page which explicitly said that Planet obeys Expires headers diligently and will not fetch content again before it expires. Skeptical, I looked at my own headers and, ta-da! they were way off:

$ curl -v https://anarc.at/tag/debian-planet/index.rss 2>&1 | egrep  '< (Expires|Date)'
< Date: Sat, 14 May 2016 19:59:28 GMT
< Expires: Sat, 28 May 2016 19:59:28 GMT

So I lowered the expires timeout on my RSS feeds to a few hours:

root@marcos:/etc/apache2# git diff
diff --git a/apache2/conf-available/expires.conf b/apache2/conf-available/expires.conf
index 214f3dd..a983738 100644
--- a/apache2/conf-available/expires.conf
+++ b/apache2/conf-available/expires.conf
@@ -3,8 +3,18 @@
   # Enable expirations.
   ExpiresActive On

-  # Cache all files for 2 weeks after access (A).
-  ExpiresDefault A1209600
+  # Cache all files 12 hours after access
+  ExpiresDefault "access plus 12 hours"
+
+  # RSS feeds should refresh more often
+  <FilesMatch \.(rss)$>
+    ExpiresDefault "modification plus 4 hours"
+  </FilesMatch> 
+
+  # images are *less* likely to change
+  <FilesMatch "\.(gif|jpg|png|js|css)$">
+    ExpiresDefault "access plus 1 month"
+  </FilesMatch>

   <FilesMatch \.(php|cgi)$>
     # Do not allow scripts to be cached unless they explicitly send cache

I also lowered the general cache expiry, except for images, Javascript and CSS.
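After reloading Apache, the new headers can be checked the same way as before:

$ sudo apache2ctl graceful
$ curl -sI https://anarc.at/tag/debian-planet/index.rss | egrep '^(Date|Expires):'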

Planet Venus maintenance

A small last word about all this: I'm surprised to see that Planet Debian is running 6-year-old software that hasn't seen a single official release yet, with local patches on top. It seems that Venus is well designed, I must give them that, but it's a little worrisome to see great software just rotting away like this.

A good "planet" site seems like a resource a lot of FLOSS communities would need: is there another "Planet-like" aggregator out there that is well maintained and more reliable? In Python, preferably.

PlanetPlanet, which Venus was forked from, is out of the question: it is even less maintained than the new fork, which itself seems to have died in 2011.

There is a discussion about the state of Venus on Github which reflects some of the concerns expressed here, as well as on the mailing list. The general consensus seems to be that everyone should switch over to Planet Pluto, which is written in Ruby.

I am not sure which planet Debian sits on - Pluto? Venus? Besides, Pluto is not even a planet anymore...

Mike check!

So this is also a test to see if my posts reach Debian Planet correctly. I suspect no one will ever see this at the top of their feeds, since the posts do get there, but with a 10-day delay and with the original date, so they get "sunk" down. The above expiration fixes won't take effect until the 10-day delay is over... But if you did see this as noise, retroactive apologies in advance for the trouble.

If you are reading this from somewhere else and wish to say hi, don't hesitate, it's always nice to hear from my readers.

14 May, 2016 07:47PM

Thadeu Lima de Souza Cascardo

Chromebook Trackpad

Three years ago, I wanted to get a new laptop. I wanted something that could run free software, preferably without blobs, with a good amount of RAM, good battery and very light, something I could carry along with a work laptop. And I didn't want to spend too much. I don't want to make this too long, so in the end, I asked in the store for anything that didn't come with Windows installed, and before I was dragged into the Macbook section, I shouted "and no Apple!". That's how I got into the Chromebook section with two options before me.

There was the Chromebook Pixel, too expensive for me, and the Samsung Chromebook, using ARM. Getting a laptop with an ARM processor was interesting for me, because I like playing with different stuff. I looked up whether it would be possible to run something other than ChromeOS on it, got the sense that it would, and made the call. It does not have much RAM, but it was cheap. I got an external HD to compensate for the lack of storage (only 16GB eMMC), and that was it.

Wifi does require non-free firmware to be loaded, but booting was a nice surprise. It is not perfect, but I will see if I can get to that another day.

I managed to get Fedora installed, downloading chunks of an image that I could write into the storage. After a while, I backed up home, and installed Debian using debootstrap.

Recently, after an upgrade from wheezy to jessie, things stopped working. systemd would not mount the most basic partitions and would simply stop very early in the boot process. That's a story on my backlog as well, that I plan to tell soon, since I believe this connects with supporting Debian on mobile devices.

After fixing some things, I decided to try libinput instead of synaptics for the Trackpad. The Chromebook uses a Cypress APA Trackpad. The driver was upstreamed in Linux 3.9. Chrome OS ships with Linux 3.4, but has the driver in its branch.

After changing to libinput, I realized clicking did not work. Neither did tapping. I moved back to synaptics, and was reminded things didn't work too well with that either. I always had to enable tapping.

I have some experience with input devices. I have written drivers, small applications reacting to some events, and some uinput userspace drivers as well. I like playing with that subsystem a lot. But I don't have much experience with multitouch, and libinput is kind of new to me too.

I got my hands on the code and found out there is libinput-debug-events. It will show you how libinput translates evdev events. I clicked on the Trackpad and got nothing but some pointer movements. I tried evtest and there were some multitouch events I didn't understand too well, but it looked like there were important events there that I thought libinput should have recognized.

I tried reading some of the libinput code, but didn't get too far before I tried something else, and then I had to leave this exercise for another day. Today, I decided to do it again. Now, with some fresh eyes, I looked at the driver code. It showed support for left, right and middle buttons. But maybe my device doesn't support them, because I don't remember seeing them in evtest when clicking the Trackpad. I also understood the other multitouch events better: they just say how many fingers there are and the position of each one. In the case of a single finger, you still get an identifier. For a better understanding of all this, reading Documentation/input/event-codes.txt and Documentation/input/multi-touch-protocol.txt is recommended.

So, in trying to answer whether libinput needs to handle my device's events properly, or handle my device specially, or whether the driver requires changes, or what else I could do to get a better experience with this Trackpad, things were pointing to the driver and the device. Then, after running evtest, I noticed a BTN_LEFT event. OK, so the device and driver support it; what is libinput doing with that? Running evtest and libinput-debug-events at the same time, I found the problem: libinput was handling BTN_LEFT correctly, but the driver was not reporting it all the time.
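
As an aside, you can watch for those button events yourself with the python-evdev bindings; here is a small sketch (the device node is an assumption, check /proc/bus/input/devices for the right one, and reading it typically requires root):

import evdev

# The event node is a guess; the Cypress APA Trackpad will show up
# under some /dev/input/eventN, see /proc/bus/input/devices.
dev = evdev.InputDevice('/dev/input/event4')
print(dev.name)

for event in dev.read_loop():
    # BTN_LEFT should arrive with value 1 (press) and 0 (release);
    # on this Trackpad it only shows up intermittently.
    if event.type == evdev.ecodes.EV_KEY and event.code == evdev.ecodes.BTN_LEFT:
        print('BTN_LEFT', 'press' if event.value else 'release')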

Going through the driver, it looks like this is either a firmware or a hardware problem. When you get the click response, sound and everything, the driver will not always report it. It could be pressure, electrical contact, I can't tell for sure. But the driver does not check for anything beyond what the firmware has reported, so it's not the driver.

A very interesting thing I found out is that you can read and write the firmware. I dumped it to a file, but still could not analyze what it is. There are some commands to put the driver into a bootloader state, so maybe it's possible to play with the firmware without bricking the device, though I am not sure yet. Even then, the problem might not be fixable by just changing the firmware.

So, I was left with the possibility of using tapping, which was not working with libinput. Grepping the code, I found out from the libinput documentation that tapping needs to be enabled. The libinput xorg driver supports that: just set the Tapping option to true and that's it, as shown below.
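
For reference, that would look roughly like this in an xorg.conf.d snippet (a sketch: the identifier is arbitrary and the exact file location, something like /etc/X11/xorg.conf.d/90-libinput.conf, depends on your distribution):

Section "InputClass"
        Identifier "libinput touchpad"
        MatchIsTouchpad "on"
        Driver "libinput"
        Option "Tapping" "on"
EndSection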

So, now I am a happy libinput user, with some of the same issues I had before with synaptics, but that's something you get used to. And I have a new firmware in front of me that maybe we could tackle with some reverse engineering.

14 May, 2016 03:26PM

Russell Coker

Xen CPU Use per Domain again

8 years ago I wrote a script to summarise Xen CPU use per domain [1]. Since then changes to Xen required changes to the script. I have new versions for Debian/Wheezy (Xen 4.1) and Debian/Jessie (Xen 4.4).

Here’s a new script for Debian/Wheezy:

#!/usr/bin/perl
use strict;

open(LIST, "xm list --long|") or die "Can't get list";

my $name = "Dom0";
my $uptime = 0.0;
my $cpu_time = 0.0;
my $total_percent = 0.0;
my $cur_time = time();

open(UPTIME, "</proc/uptime") or die "Can't open /proc/uptime";
my @arr = split(/ /, <UPTIME>);
$uptime = $arr[0];
close(UPTIME);

my %all_cpu;

while(<LIST>)
{
  chomp;
  if($_ =~ /^\)/)
  {
    my $cpu = $cpu_time / $uptime * 100.0;
    if($name =~ /Domain-0/)
    {
      printf("%s uses %.2f%% of one CPU\n", $name, $cpu);
    }
    else
    {
      $all_cpu{$name} = $cpu;
    }
    $total_percent += $cpu;
    next;
  }
  $_ =~ s/\).*$//;
  if($_ =~ /start_time /)
  {
    $_ =~ s/^.*start_time //;
    $uptime = $cur_time - $_;
    next;
  }
  if($_ =~ /cpu_time /)
  {
    $_ =~ s/^.*cpu_time //;
    $cpu_time = $_;
    next;
  }
  if($_ =~ /\(name /)
  {
    $_ =~ s/^.*name //;
    $name = $_;
    next;
  }
}
close(LIST);

sub hashValueDescendingNum {
  $all_cpu{$b} <=> $all_cpu{$a};
}

my $key;

foreach $key (sort hashValueDescendingNum (keys(%all_cpu)))
{
  printf("%s uses %.2f%% of one CPU\n", $key, $all_cpu{$key});
}

printf("Overall CPU use approximates %.1f%% of one CPU\n", $total_percent);

Here’s the script for Debian/Jessie:

#!/usr/bin/perl

use strict;

open(UPTIME, "xl uptime|") or die "Can't get uptime";
open(LIST, "xl list|") or die "Can't get list";

my %all_uptimes;

while(<UPTIME>)
{
  chomp $_;

  next if($_ =~ /^Name/);
  $_ =~ s/ +/ /g;

  my @split1 = split(/ /, $_);
  my $dom = $split1[0];
  my $uptime = 0;
  my $time_ind = 2;
  if($split1[3] eq "days,")
  {
    $uptime = $split1[2] * 24 * 3600;
    $time_ind = 4;
  }
  my @split2 = split(/:/, $split1[$time_ind]);
  $uptime += $split2[0] * 3600 + $split2[1] * 60 + $split2[2];
  $all_uptimes{$dom} = $uptime;
}
close(UPTIME);

my $total_percent = 0;

while(<LIST>)
{
  chomp $_;

  my $dom = $_;
  $dom =~ s/ .*$//;

  if ( $_ =~ /(\d+)\.[0-9]$/ )
  {
    my $percent = $1 / $all_uptimes{$dom} * 100.0;
    $total_percent += $percent;
    printf("%s uses %.2f%% of one CPU\n", $dom, $percent);
  }
  else
  {
    next;
  }
}

printf("Overall CPU use approximates  %.1f%% of one CPU\n", $total_percent);

14 May, 2016 09:30AM by etbe

Michal Čihař

Fifteen years with phpMyAdmin and free software

Today marks fifteen years since my first contribution to free software. I've changed several jobs since that time, all of them involving quite a lot of free software, and now I'm working fully on free software.

That first contribution happened to be to phpMyAdmin, and it consisted of a Czech translation:

Subject: Updated Czech translation of phpMyAdmin
From: Michal Cihar <cihar@email.cz>
To: swix@users.sourceforge.net
Date: Mon, 14 May 2001 11:23:36 +0200
X-Mailer: KMail [version 1.2]

Hi

I've updated (translated few added messages) Czech translation of phpMyAdmin. 
I send it to you in two encodings, because I thing that in distribution 
should be included version in ISO-8859-2 which is more standard than Windows 
1250.

Regards
    Michal Cihar

Many other contributions came afterwards and several projects died on the way, but it has been a great ride so far. To see some of these, you can look at my software page, which contains both current and past projects, and also includes tools I created earlier (mostly for Windows) and open sourced later.

These days you can find me being active on phpMyAdmin, Gammu, python-gammu and Wammu, Debian and Weblate.

Filed under: Debian English phpMyAdmin SUSE | 2 comments

14 May, 2016 09:23AM

Gunnar Wolf

Debugging backdoors and the usual software distribution for embedded-oriented systems

In the ARM world, to which I am still mostly a newcomer (although I've already been playing with ARM machines for over two years, I am a complete newbie compared to my Debian friends who live and breathe that architecture), the most common way to distribute operating systems is to distribute complete, already-installed images. I have ranted in the past on how those images ought to be distributed.

Some time later, I also discussed on my blog on how most of this hardware requires unauditable binary blobs and other non-upstreamed modifications to Linux.

In the meanwhile, I started teaching on the Embedded Linux diploma course in Facultad de Ingeniería, UNAM. It has been quite successful — And fun.

Anyway, one of the points we emphasize to our students is that the very concept of embedded makes the mere idea of downloading a pre-built 4GB image, loaded with a (supposedly lightweight, but far fatter than my usual) desktop environment and whatnot, an irony.

As part of the "Linux Userspace" and "Boot process" modules, we make a lot of emphasis on how to build a minimal image. And even leaving installed size aside, it all boils down to trust. We teach mainly four different ways of setting up a system:

  • Using our trusty Debian Installer in the (unfortunately few) devices where it is supported
  • Installing via Debootstrap, as I did in my CuBox-i tutorial (note that the tutorial is nowadays obsolete. The CuBox-i can boot with Debian Installer!) and just keeping the boot partition (both for u-boot and for the kernel) of the vendor-provided install
  • Building a barebones system using the great Buildroot set of scripts and hacks
  • Downloading a full, but minimal, installed image, such as OpenWRT (I have yet to see what's there about its fork, LEDE)

Now... In the past few days, a huge vulnerability / oversight was discovered and made public, supporting my distrust of distribution forms that do not come from, well... The people we already know and trust to do this kind of work!

Most current ARM chips cannot run with the stock, upstream Linux kernel. They require a set of patches that different vendors pile up to support their basic hardware (remember, those systems are almost always systems-on-a-chip, SoCs). Some vendors do take on the hard work of trying to upstream their changes — that is, pushing the changes they made to the kernel for inclusion in mainline Linux. This is a very hard task, and many vendors just abandon it.

So, in many cases, we are stuck running with nonstandard kernels, full with huge modifications... And we trust them to do things right. After all, if they are knowledgeable enough to design a SoC, they should do at least decent kernel work, right?

Turns out, it's far from the case. I have a very nice and nifty Banana Pi M3, based on the Allwinner A83T SoC. 2GB RAM, 8 ARM cores... A very nice little system, almost usable as a desktop. But it only boots with their modified 3.4.x kernel.

This kernel has a very ugly flaw: a debugging mode left open that allows any local user to become root. Even on a mostly clean Debian system, installed by a chrooted debootstrap:

Debian GNU/Linux 8 bananapi ttyS0

banana login: gwolf
Password:

Last login: Thu Sep 24 14:06:19 CST 2015 on ttyS0
Linux bananapi 3.4.39-BPI-M3-Kernel #9 SMP PREEMPT Wed Sep 23 15:37:29 HKT 2015 armv7l

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.

gwolf@banana:~$ id
uid=1001(gwolf) gid=1001(gwolf) groups=1001(gwolf),4(adm),20(dialout),21(fax),24(cdrom),25(floppy),26(tape),27(sudo),29(audio),30(dip),44(video),46(plugdev),108(netdev)
gwolf@banana:~$ echo rootmydevice > /proc/sunxi_debug/sunxi_debug
gwolf@banana:~$ id
groups=0(root),4(adm),20(dialout),21(fax),24(cdrom),25(floppy),26(tape),27(sudo),29(audio),30(dip),44(video),46(plugdev),108(netdev),1001(gwolf)

Why? Oh, well, in this kernel somebody forgot to comment out (or outright remove!) the sunxi-debug.c file, or at the very least, a horrid piece of code therein (it's a very small, simple file):

if(!strncmp("rootmydevice",(char*)buf,12)){
        cred = (struct cred *)__task_cred(current);
        cred->uid = 0;
        cred->gid = 0;
        cred->suid = 0;
        cred->euid = 0;
        cred->euid = 0;
        cred->egid = 0;
        cred->fsuid = 0;
        cred->fsgid = 0;
        printk("now you are root\n");
}

Now... Just by looking at this file, many things should be obvious. For example, this is not only dangerous and lazy (it exists so developers can debug by touching a file instead of... typing a password?), it also goes against the kernel coding guidelines — the file is not documented or commented at all. Peeking around other files in the repository, it becomes obvious that many files suffer from this same basic issue — and getting this upstreamed would be a titanic task. If their programmers had tried to adhere to the guidelines to begin with, integration would be a much easier path. Cutting the wrong corners just increases the amount of work needed.

Anyway, enough said by me. Some other sources of information:

There are surely many other mentions of this. I just had to repeat it for my local echo chamber, and for future reference in class! ;-)

14 May, 2016 12:58AM by gwolf

May 13, 2016

Norbert Preining

TeX Live 2016 (pretest) hits Debian/unstable

The sources of the TeX Live binaries are now (hopefully) frozen, and barring unpleasant surprises, this will be the code going into the final release (one fix for luatex is still coming, though). Thus, I thought it was time to upload TeX Live 2016 packages to Debian/unstable to expose them to a wider testing audience – packages in experimental receive hardly any testing.

texlive-2016-debian-pretest

The biggest changes are in Luatex, where the APIs were changed fundamentally and practically every package using luatex-specific code needs to be adjusted. Most package authors have already uploaded fixed versions to CTAN and thus to TeX Live, but some are surely still open. I have taken the step of providing driver files for pgf and pgfplots to support pgf with luatex (as I need it myself).

One more thing to mention is that the binaries finally bring support for reproducible builds by honoring the SOURCE_DATE_EPOCH environment variable.
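
The convention behind SOURCE_DATE_EPOCH is simple: if the variable is set, a tool uses its value (seconds since the Unix epoch, interpreted in UTC) instead of the current time for any timestamp it embeds. A minimal sketch of the idea, in Python rather than the actual TeX Live sources:

import os
import time

# Honor SOURCE_DATE_EPOCH if set, fall back to the current time.
epoch = int(os.environ.get('SOURCE_DATE_EPOCH', time.time()))

# Format in UTC so the result does not depend on the build machine's
# timezone; two builds with the same SOURCE_DATE_EPOCH now embed the
# same timestamp.
print(time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime(epoch)))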

Please send bug reports, suggestions, and improvements (patches welcome!) to improve the quality of the packages. In particular, lintian complains a lot about various man page problems. If someone wants to go through all that it would help a lot. Details on request.

Other than that, many packages have been updated or added since the last Debian packages, here are the incomplete lists (I had accidentally deleted the tlmgr.log file at some point):

new: acmart, chivo, coloring, dvisvgm-def, langsci, makebase, pbibtex-base, platex, ptex-base, ptex-fonts, rosario, uplatex, uptex-base, uptex-fonts.

updated: achemso, acro, arabluatex, arydshln, asymptote, babel-french, biblatex-ieee, bidi, bookcover, booktabs, bxjscls, chemformula, chemmacros, cslatex, csplain, cstex, dtk, dvips, epspdf, fibeamer, footnotehyper, glossaries, glossaries-extra, gobble, graphics, gregoriotex, hyperref, hyperxmp, jadetex, jslectureplanner, koma-script, kpathsea, latex-bin, latexmk, lollipop, luaotfload, luatex, luatexja, luatexko, mathastext, mcf2graph, mex, microtype, msu-thesis, m-tx, oberdiek, pdftex, pdfx, pgf, pgfplots, platex, pmx, pst-cie, pst-func, pst-ovl, pst-plot, ptex, ptex-fonts, reledmac, shdoc, substances, tasks, tetex, tools, uantwerpendocs, ucharclasses, uplatex, uptex, uptex-fonts, velthuis, xassoccnt, xcolor, xepersian, xetex, xgreek, xmltex.

Enjoy.

13 May, 2016 01:22AM by Norbert Preining

May 12, 2016

Antoine Beaupré

Notmuch, offlineimap and Sieve setup

I've been using Notmuch since about 2011, switching away from Mutt to deal with the monstrous amount of emails I was, and still am, dealing with on the computer. I have contributed a few patches and configs on the Notmuch mailing list, but basically, I have given up on merging patches and instead have a custom config in Emacs that extends it the way I want. In the last 5 years, Notmuch has progressed significantly, so I haven't found the need to patch it or make sweeping changes.

The huge INBOX of death

The one thing that is problematic with my use of Notmuch is that I end up with a ridiculously large INBOX folder. Before the cleanup I did this morning, I had over 10k emails in there, out of about 200k emails overall.

Since I mostly work from my laptop these days, the Notmuch tags are only on the laptop and not propagated to the server. This makes accessing the mail spool directly, from webmail or simply through a local client (say Mutt) on the server, really inconvenient, because it has to load a very large spool of mail, which is very slow in Mutt. Even worse, a bunch of mail that was archived in Notmuch shows up in the spool, because archiving in Notmuch just removes tags: the mails are still in the inbox, even though they are marked as read.

So I was hoping that Notmuch would help me deal with the giant inbox of death problem, but in fact, when I don't use Notmuch, it actually makes the problem worse. Today, I did a bunch of improvements to my setup to fix that.

The first thing I did was to kill procmail, which I was surprised to discover has been dead for over a decade. I switched over to Sieve for filtering, having already switched to Dovecot a while back on the server. I tried to use the procmail2sieve.pl conversion tool but it didn't work very well, so I basically rewrote the whole file. Since I was mostly using Notmuch for filtering, there wasn't much left to convert.

Sieve filtering

But this is where things got interesting: Sieve is so much simpler to use and more intuitive that I started doing more interesting stuff in bridging the filtering system (Sieve) with the tagging system (Notmuch). Basically, I use Sieve to split large chunks of email off my main inbox, to try to remove as much spam, bulk email, notifications and mailing lists as possible from the larger flow of emails. Then Notmuch comes in and does some fine-tuning, assigning tags to specific mailing lists or topics, and generally being the awesome search engine that I use on a daily basis.

Dovecot and Postfix configs

For all of this to work, I had to tweak my mail servers to talk Sieve. First, I enabled sieve in Dovecot:

--- a/dovecot/conf.d/15-lda.conf
+++ b/dovecot/conf.d/15-lda.conf
@@ -44,5 +44,5 @@

 protocol lda {
   # Space separated list of plugins to load (default is global mail_plugins).
-  #mail_plugins = $mail_plugins
+  mail_plugins = $mail_plugins sieve
 }

Then I had to switch from procmail to dovecot for local delivery, that was easy, in Postfix's perennial main.cf:

#mailbox_command = /usr/bin/procmail -a "$EXTENSION"
mailbox_command = /usr/lib/dovecot/dovecot-lda -a "$RECIPIENT"

Note that dovecot takes the full recipient as an argument, not just the extension. That's normal. It's clever, it knows that kind of stuff.

One last tweak I did was to enable automatic mailbox creation and subscription, so that the automatic extension filtering (below) can create mailboxes on the fly:

--- a/dovecot/conf.d/15-lda.conf
+++ b/dovecot/conf.d/15-lda.conf
@@ -37,10 +37,10 @@
 #lda_original_recipient_header =

 # Should saving a mail to a nonexistent mailbox automatically create it?
-#lda_mailbox_autocreate = no
+lda_mailbox_autocreate = yes

 # Should automatically created mailboxes be also automatically subscribed?
-#lda_mailbox_autosubscribe = no
+lda_mailbox_autosubscribe = yes

 protocol lda {
   # Space separated list of plugins to load (default is global mail_plugins).

Sieve rules

Then I had to create a Sieve ruleset. That thing lives in ~/.dovecot.sieve, since I'm running Dovecot. Your provider may accept an arbitrary ruleset like this, or you may need to go through a web interface, or who knows. I'm assuming you're running Dovecot and have a shell from now on.

The first part of the file is simply to enable a bunch of extensions, as needed:

# Sieve Filters
# http://wiki.dovecot.org/Pigeonhole/Sieve/Examples
# https://tools.ietf.org/html/rfc5228
require "fileinto";
require "envelope";
require "variables";
require "subaddress";
require "regex";
require "vacation";
require "vnd.dovecot.debug";

Some of those are not used yet; for example, I haven't tested the vacation module, but I have good hopes that I can use it as a way to announce a special "urgent" mailbox while I'm traveling. The rationale is to have a distinct mailbox for urgent messages that is announced in the autoreply and that hopefully won't be parsable by bots.

Spam filtering

Then I filter spam using this fairly standard expression:

########################################################################
# spam 
# possible improvement, server-side:
# http://wiki.dovecot.org/Pigeonhole/Sieve/Examples#Filtering_using_the_spamtest_and_virustest_extensions
if header :contains "X-Spam-Flag" "YES" {
  fileinto "junk";
  stop;
} elsif header :contains "X-Spam-Level" "***" {
  fileinto "greyspam";
  stop;
}

This puts stuff into the junk or greyspam folder, based on the severity. I am very aggressive with spam: stuff often ends up in the greyspam folder, which I need to check from time to time, but it beats having too much spam in my inbox.

Mailing lists

Mailing lists are generally put into a lists folder, with some mailing lists getting their own folder:

########################################################################
# lists
# converted from procmail
if header :contains "subject" "FreshPorts" {
    fileinto "freshports";
} elsif header :contains "List-Id" "alternc.org" {
    fileinto "alternc";
} elsif header :contains "List-Id" "koumbit.org" {
    fileinto "koumbit";
} elsif header :contains ["to", "cc"] ["lists.debian.org",
                                       "anarcat@debian.org"] {
    fileinto "debian";
# Debian BTS
} elsif exists "X-Debian-PR-Message" {
    fileinto "debian";
# default lists fallback
} elsif exists "List-Id" {
    fileinto "lists";
}

The idea here is that I can safely subscribe to lists without polluting my mailbox by default. Further processing is done in Notmuch.

Extension matching

I also use the magic +extension feature of email addresses. If you send email to, say, foo+extension@example.com, the emails end up in the extension folder. This is done with the help of the following recipe:

########################################################################
# wildcard +extension
# http://wiki.dovecot.org/Pigeonhole/Sieve/Examples#Plus_Addressed_mail_filtering
if envelope :matches :detail "to" "*" {
  # Save the detail part (after the +) in ${name}, all lowercase:
  # Joe, joe and jOe all become 'joe'.
  set :lower "name" "${1}";
  fileinto "${name}";
  #debug_log "filed into mailbox ${name} because of extension";
  stop;
}

This is actually very effective: any time I register to a service, I try as much as possible to add a +extension that describes the service. Of course, spammers and marketers (it's the same really) are free to drop the extension and I suspect a lot of them do, but it helps with honest providers and this actually sorts a lot of stuff out of my inbox into topically-defined folders.

It is also a security issue: someone could flood my filesystem with tons of mail folders, which would cripple the IMAP server and eat all the inodes, 4 times faster than just sending emails. But I guess I'll cross that bridge when I get there: anyone can flood my address and I have other mechanisms to deal with this.

The trick is to then assign tags to all folders so that they appear in the Notmuch-emacs welcome view:

echo tagging folders
for folder in $(ls -ad $HOME/Maildir/${PREFIX}*/ | egrep -v "Maildir/${PREFIX}(feeds.*|Sent.*|INBOX/|INBOX/Sent)\$"); do
    tag=$(echo $folder | sed 's#/$##;s#^.*/##')
    notmuch tag +$tag -inbox tag:inbox and not tag:$tag and folder:${PREFIX}$tag
done

This is part of my notmuch-tag script that includes a lot more fine-tuned filtering, detailed below.

Automated reports filtering

Another thing I get a lot of is machine-generated "spam". Well, it's not commercial spam, but it's a bunch of Nagios, cron jobs, and god knows what software thinks it's important to send me emails every day. I get a lot fewer of those these days since I'm off work at Koumbit, but still, those can be useful for others as well:

if anyof (exists "X-Cron-Env",
          header :contains ["subject"] ["security run output",
                                        "monthly run output",
                                        "daily run output",
                                        "weekly run output",
                                        "Debian Package Updates",
                                        "Debian package update",
                                        "daily mail stats",
                                        "Anacron job",
                                        "nagios",
                                        "changes report",
                                        "run output",
                                        "[Systraq]",
                                        "Undelivered mail",
                                        "Postfix SMTP server: errors from",
                                        "backupninja",
                                        "DenyHosts report",
                                        "Debian security status",
                                        "apt-listchanges"
                                        ],
           header :contains "Auto-Submitted" "auto-generated",
           envelope :contains "from" ["nagios@",
                                      "logcheck@"])
    {
    fileinto "rapports";
}
# imported from procmail
elsif header :comparator "i;octet" :contains "Subject" "Cron" {
  if header :regex :comparator "i;octet"  "From" ".*root@" {
        fileinto "rapports";
  }
}
elsif header :comparator "i;octet" :contains "To" "root@" {
  if header :regex :comparator "i;octet"  "Subject" "\\*\\*\\* SECURITY" {
        fileinto "rapports";
  }
}
elsif header :contains "Precedence" "bulk" {
    fileinto "bulk";
}

Refiltering emails

Of course, after all this I still had thousands of emails in my inbox, because the sieve filters apply only to new emails. The beauty of Sieve support in Dovecot is that there is a neat sieve-filter command that can reprocess an existing mailbox. That was a lifesaver. To run a specific sieve filter on a mailbox, I simply run:

sieve-filter .dovecot.sieve INBOX 2>&1 | less

Well, this doesn't do anything. To really execute the filters, you need the -e flag, and to write to the INBOX for real, you need the -W flag as well, so the real run looks something more like this:

sieve-filter -e -W -v .dovecot.sieve INBOX > refilter.log 2>&1

The funky output redirects are necessary because this outputs a lot of crap. Also note that, unfortunately, the fake run output differs from the real run and is actually more verbose, which makes it rather less useful than it could be.

Archival

I also usually archive my mails every year, rotating my mailbox into an Archive.YYYY directory. For example, all mails from 2015 are now archived in an Archive.2015 directory. I used to do this with Mutt tagging and it was a little slow and error-prone. Now, I simply have this Sieve script:

require ["variables","date","fileinto","mailbox", "relational"];

# Extract date info
if currentdate :matches "year" "*" { set "year" "${1}"; }

if date :value "lt" :originalzone "date" "year" "${year}" {
  if date :matches "received" "year" "*" {
    # Archive Dovecot mailing list items by year and month.
    # Create folder when it does not exist.
    fileinto :create "Archive.${1}";
  }
}

I went from 15613 to 1040 emails in my real inbox with this process (including refiltering with the default filters as well).

Notmuch configuration

My Notmuch configuration is in three parts. I have small settings in ~/.notmuch-config; the gist of it is:

[new]
tags=unread;inbox;
ignore=

#[maildir]
# synchronize_flags=true
# tentative patch that was refused upstream
# http://mid.gmane.org/1310874973-28437-1-git-send-email-anarcat@koumbit.org
#reckless_trash=true

[search]
exclude_tags=deleted;spam;

I omitted the fairly trivial [user] section for privacy reasons and the [database] section to reduce clutter.

Then I have a notmuch-tag script symlinked into ~/Maildir/.notmuch/hooks/post-new. It does way too much stuff to describe in detail here, but here are a few snippets:

if hostname | grep angela > /dev/null; then
    PREFIX=Anarcat/
else
    PREFIX=.
fi

This sets a variable that makes the script work on my laptop (angela), where mailboxes are in Maildir/Anarcat/foo, or on the server, where mailboxes are in Maildir/.foo.

I also have special rules to tag my RSS feeds, which are generated by feed2imap, documented further below:

echo tagging feeds
( cd $HOME/Maildir/ && for feed in ${PREFIX}feeds.*; do
    name=$(echo $feed | sed "s#${PREFIX}feeds\\.##")
    notmuch tag +feeds +$name -inbox folder:$feed and not tag:feeds
done )

Another useful example is how to tag mailing lists; for instance, this removes the inbox tag and adds the notmuch tags to emails from the notmuch mailing list:

notmuch tag +lists +notmuch      -inbox tag:inbox and "to:notmuch@notmuchmail.org"

Finally, I have a bunch of special keybindings in ~/.emacs.d/notmuch-config.el:

;; autocompletion
(eval-after-load "notmuch-address"
  '(progn
     (notmuch-address-message-insinuate)))

; use fortune for signature, config is in custom
(add-hook 'message-setup-hook 'fortune-to-signature)
; don't remember what that is
(add-hook 'notmuch-show-hook 'visual-line-mode)

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;; keymappings
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(define-key notmuch-show-mode-map "S"
  (lambda ()
    "mark message as spam and advance"
    (interactive)
    (notmuch-show-tag '("+spam" "-unread"))
    (notmuch-show-next-open-message-or-pop)))

(define-key notmuch-search-mode-map "S"
  (lambda (&optional beg end)
    "mark message as spam and advance"
    (interactive (notmuch-search-interactive-region))
    (notmuch-search-tag (list "+spam" "-unread") beg end)
    (anarcat/notmuch-search-next-message)))

(define-key notmuch-show-mode-map "H"
  (lambda ()
    "mark message as spam and advance"
    (interactive)
    (notmuch-show-tag '("-spam"))
    (notmuch-show-next-open-message-or-pop)))

(define-key notmuch-search-mode-map "H"
  (lambda (&optional beg end)
    "mark message as spam and advance"
    (interactive (notmuch-search-interactive-region))
    (notmuch-search-tag (list "-spam") beg end)
    (anarcat/notmuch-search-next-message)))

(define-key notmuch-search-mode-map "l" 
  (lambda (&optional beg end)
    "undelete and advance"
    (interactive (notmuch-search-interactive-region))
    (notmuch-search-tag (list "-unread") beg end)
    (anarcat/notmuch-search-next-message)))

(define-key notmuch-search-mode-map "u"
  (lambda (&optional beg end)
    "undelete and advance"
    (interactive (notmuch-search-interactive-region))
    (notmuch-search-tag (list "-deleted") beg end)
    (anarcat/notmuch-search-next-message)))

(define-key notmuch-search-mode-map "d"
  (lambda (&optional beg end)
    "delete and advance"
    (interactive (notmuch-search-interactive-region))
    (notmuch-search-tag (list "+deleted" "-unread") beg end)
    (anarcat/notmuch-search-next-message)))

(define-key notmuch-show-mode-map "d"
  (lambda ()
    "delete current message and advance"
    (interactive)
    (notmuch-show-tag '("+deleted" "-unread"))
    (notmuch-show-next-open-message-or-pop)))

;; https://notmuchmail.org/emacstips/#index17h2
(define-key notmuch-show-mode-map "b"
  (lambda (&optional address)
    "Bounce the current message."
    (interactive "sBounce To: ")
    (notmuch-show-view-raw-message)
    (message-resend address)
    (kill-buffer)))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;; my custom notmuch functions
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(defun anarcat/notmuch-search-next-message (&optional beg end)
  "Skip to next message from region or point

This is necessary because notmuch-search-next-thread just starts
from point, whereas it seems to me more logical to start from the
end of the region."
  ;; move to the line before the end of the region if there is one
  (when (and beg end (/= beg end))
    (goto-char (- end 1)))
  (notmuch-search-next-thread))

;; Linking to notmuch messages from org-mode
;; https://notmuchmail.org/emacstips/#index23h2
(require 'org-notmuch nil t)

(message "anarcat's custom notmuch config loaded")

This is way too long: in my opinion, a bunch of that stuff should be factored into upstream, but some features have been hard to get in. For example, Notmuch is really hesitant about marking emails as deleted. The community is also very strict about having unit tests for everything, which makes writing new patches a significant challenge for a newcomer, who will often need to be familiar with both Elisp and C. So for now I just have those configs that I carry around.

Emails marked as deleted or spam are processed with the following script named notmuch-purge which I symlink to ~/Maildir/.notmuch/hooks/pre-new:

#!/bin/sh

if hostname | grep angela > /dev/null; then
    PREFIX=Anarcat/
else
    PREFIX=.
fi

echo moving tagged spam to the junk folder
notmuch search --output=files tag:spam \
        and not folder:${PREFIX}junk \
        and not folder:${PREFIX}greyspam \
        and not folder:Koumbit/INBOX \
        and not path:Koumbit/** \
    | while read file; do
          mv "$file" "$HOME/Maildir/${PREFIX}junk/cur"
      done

echo unconditionally deleting deleted mails
notmuch search --output=files tag:deleted | xargs -r rm

Oh, and there's also customization for Notmuch:

;; -*- mode: emacs-lisp; auto-recompile: t; -*-
(custom-set-variables
 ;; from https://anarc.at/sigs.fortune
 '(fortune-file "/home/anarcat/.mutt/sigs.fortune")
 '(message-send-hook (quote (notmuch-message-mark-replied)))
 '(notmuch-address-command "notmuch-address")
 '(notmuch-always-prompt-for-sender t)
 '(notmuch-crypto-process-mime t)
 '(notmuch-fcc-dirs
   (quote
    ((".*@koumbit.org" . "Koumbit/INBOX.Sent")
     (".*" . "Anarcat/Sent"))))
 '(notmuch-hello-tag-list-make-query "tag:unread")
 '(notmuch-message-headers (quote ("Subject" "To" "Cc" "Bcc" "Date" "Reply-To")))
 '(notmuch-saved-searches
   (quote
    ((:name "inbox" :query "tag:inbox and not tag:koumbit and not tag:rt")
     (:name "unread inbox" :query "tag:inbox and tag:unread")
     (:name "unread" :query "tag:unred")
     (:name "freshports" :query "tag:freshports and tag:unread")
     (:name "rapports" :query "tag:rapports and tag:unread")
     (:name "sent" :query "tag:sent")
     (:name "drafts" :query "tag:draft"))))
 '(notmuch-search-line-faces
   (quote
    (("deleted" :foreground "red")
     ("unread" :weight bold)
     ("flagged" :foreground "blue"))))/
 '(notmuch-search-oldest-first nil)
 '(notmuch-show-all-multipart/alternative-parts nil)
 '(notmuch-show-all-tags-list t)
 '(notmuch-show-insert-text/plain-hook
   (quote
    (notmuch-wash-convert-inline-patch-to-part notmuch-wash-tidy-citations notmuch-wash-elide-blank-lines notmuch-wash-excerpt-citations)))
 )

I think that covers it.

Offlineimap

So of course the above works well on the server directly, but how do I run Notmuch on a remote machine that doesn't have direct access to the mail spool? This is where OfflineIMAP comes in. It allows me to incrementally synchronize a local Maildir folder hierarchy with a remote IMAP server. I am assuming you already have an IMAP server configured, since you already configured Sieve above.

Note that other synchronization tools exist. The other popular one is isync, but I had trouble migrating to it (see courriels for details), so for now I am sticking with OfflineIMAP.

The configuration is fairly simple:

[general]
accounts = Anarcat
ui = Blinkenlights
maxsyncaccounts = 3

[Account Anarcat]
localrepository = LocalAnarcat
remoterepository = RemoteAnarcat
# refresh all mailboxes every 10 minutes
autorefresh = 10
# run notmuch after refresh
postsynchook = notmuch new
# sync only mailboxes that changed
quick = -1
## possible optimisation: ignore mails older than a year
#maxage = 365

# local mailbox location
[Repository LocalAnarcat]
type = Maildir
localfolders = ~/Maildir/Anarcat/

# remote IMAP server
[Repository RemoteAnarcat]
type = IMAP
remoteuser = anarcat
remotehost = anarc.at
ssl = yes
# without this, the cert is not verified (!)
sslcacertfile = /etc/ssl/certs/DST_Root_CA_X3.pem
# do not sync archives
folderfilter = lambda foldername: not re.search('(Sent\.20[01][0-9]\..*)', foldername) and not re.search('(Archive.*)', foldername)
# and only subscribed folders
subscribedonly = yes
# don't reconnect all the time
holdconnectionopen = yes
# get mails from INBOX immediately, doesn't trigger postsynchook
idlefolders = ['INBOX']

Critical parts are:

  • postsynchook: obviously, we want to run notmuch after fetching mail
  • idlefolders: receives emails immediately without waiting for the longer autorefresh delay, which otherwise means that most mailboxes don't see new emails for up to 10 minutes in the worst case. Unfortunately, it doesn't run the postsynchook, so I need to hit G in Emacs to see new mail
  • quick=-1, subscribedonly, holdconnectionopen: makes most runs much, much faster as it skips unchanged or unsubscribed folders and keeps the connection to the server

The other settings should be self-explanatory.

RSS feeds

I gave up on RSS readers, or more precisely, I merged RSS feeds and email. The first time I heard of this, it sounded like a horrible idea, because it means yet more emails! But with proper filtering, it's actually a really nice way to process emails, since it leverages the distributed nature of email.

For this I use a fairly standard feed2imap setup, although I do not deliver to an IMAP server, but straight to a local Maildir. The configuration looks like this:

---
include-images: true
target-prefix: &target "maildir:///home/anarcat/Maildir/.feeds."
feeds:
- name: Planet Debian
  url: http://planet.debian.org/rss20.xml
  target: [ *target, 'debian-planet' ]

I obviously have more feeds; the above is just an example. This will deliver the feeds as emails, one mailbox per feed: in the above example, ~/Maildir/.feeds.debian-planet.

Troubleshooting

You will fail at writing the Sieve filters correctly, and mail will (hopefully?) fall through to your regular mailbox. Syslog will tell you when things fail, as expected, and details are in the .dovecot.sieve.log file in your home directory.

I also enabled debugging on the Sieve module:

--- a/dovecot/conf.d/90-sieve.conf
+++ b/dovecot/conf.d/90-sieve.conf
@@ -51,6 +51,7 @@ plugin {
        # deprecated imapflags extension in addition to all extensions were already
   # enabled by default.
   #sieve_extensions = +notify +imapflags
+  sieve_extensions = +vnd.dovecot.debug

   # Which Sieve language extensions are ONLY available in global scripts. This
   # can be used to restrict the use of certain Sieve extensions to administrator

This allowed me to use the debug_log function in the rulesets to output stuff directly to the logfile.

Further improvements

Of course, this is all done on the command line, but that is somewhat expected if you are already running Notmuch. It would of course be much easier to edit those filters through a GUI: Roundcube has a nice Sieve plugin, and Thunderbird has such a plugin as well. Since Sieve is a standard, there are a bunch of clients available. All of those need you to set up some sort of ManageSieve service on the server, which I haven't bothered doing yet.

And of course, a key improvement would be for Notmuch to synchronize its state better with the mailboxes directly, instead of relying on the notmuch-purge hack above. Dovecot and the Maildir format support up to 26 flags, and there were discussions about using those flags to synchronize with notmuch tags so that multiple notmuch clients can see the same tags on different machines transparently.

This, however, won't make Notmuch work on my phone or webmail or any other more generic client: for that, Sieve rules are still very useful.

I still don't have webmail set up at all: so to read email, I need an actual client, which is currently my phone, which means I need Wifi access to read email. "Internet cafés" or "this guy's computer" won't work as well, although I can always use ssh to log in straight to the server and read mail with Mutt.

I am also considering using X509 client certificates to authenticate to the mail server without a passphrase. This involves configuring Postfix, which seems simple enough. Dovecot's configuration seems a little more involved and less well documented. It seems that both OfflineIMAP and K-9 Mail support client-side certs. OfflineIMAP prompts me for the password so it doesn't get leaked anywhere. I am a little concerned about building yet another CA, but I guess it would not be so hard...

The server side of things needs more documenting, particularly the spam filters. This is currently spread around this wiki, mostly in configuration.

Security considerations

The whole purpose of this was to make it easier to read my mail on other devices. This introduces a new vulnerability: someone may steal that device or compromise it to read my mail, impersonate me on different services and even get a shell on the remote server.

Thanks to the two-factor authentication I set up on the server, I feel a little more confident that just getting the passphrase to the mail account isn't sufficient anymore to leverage shell access. It also allows me to log in with ssh on the server without trusting the machine too much, although that only goes so far... Of course, sudo is then out of the question and I must assume that everything I see is also seen by the attacker, who can also inject keystrokes and do all sorts of nasty things.

Since I also connected my email account on my phone, someone could steal the phone and start impersonating me. The mitigation here is that there is a PIN for the screen lock, and the phone is encrypted. Encryption isn't so great when the passphrase is a PIN, but I'm working on having a better key that is required on reboot, and the phone shuts down after 5 failed attempts. This is documented in my phone setup.

Client-side X509 certificates further mitigate those kinds of compromises, as the X509 certificate won't give shell access.

Basically, if the phone is lost, all hell breaks loose: I need to change the email password (or revoke the certificate), as I assume the account is about to be compromised. I do not trust Android security to give me protection indefinitely. In fact, one could argue that the phone is already compromised and putting the password there already enabled a possible state-sponsored attacker to hijack my email address. This is why I have an OpenPGP key on my laptop to authenticate myself for critical operations like code signatures.

The risk of identity theft from the state is, after all, a tautology: the state is the primary owner of identities, some could say by definition. So if a state-sponsored attacker would like to masquerade as me, they could simply issue a passport under my name and join a OpenPGP key signing party, and we'd have other problems to deal with, namely, proper infiltration counter-measures and counter-snitching.

12 May, 2016 11:29PM

Ingo Juergensmann

Xen randomly crashing server - part 2

Some weeks ago I blogged about "Xen randomly crashing server". The problem back then was that I couldn't get any information about why the server reboots. Using a netconsole was not possible, because netconsole refused to work with the bridge that is used for Xen networking. Luckily my colocation partner rrbone.net connected the second network port of my server to the network, so that I could use eth1 instead of the bridged eth0 for netconsole.

Today the server crashed several times and I was able to collect some more information than just the screenshots from the IPMI/KVM console shown in my last blog entry (the full netconsole output is attached as a file):

May 12 11:56:39 31.172.31.251 [829681.040596] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.16.0-4-amd64 #1 Debian 3.16.7-ckt25-2
May 12 11:56:39 31.172.31.251 [829681.040647] Hardware name: Supermicro X9SRE/X9SRE-3F/X9SRi/X9SRi-3F/X9SRE/X9SRE-3F/X9SRi/X9SRi-3F, BIOS 3.0a 01/03/2014
May 12 11:56:39 31.172.31.251 [829681.040701] task: ffffffff8181a460 ti: ffffffff81800000 task.ti: ffffffff81800000
May 12 11:56:39 31.172.31.251 [829681.040749] RIP: e030:[<ffffffff812b7e56>]
May 12 11:56:39 31.172.31.251  [<ffffffff812b7e56>] memcpy+0x6/0x110
May 12 11:56:39 31.172.31.251 [829681.040802] RSP: e02b:ffff880280e03a58  EFLAGS: 00010286
May 12 11:56:39 31.172.31.251 [829681.040834] RAX: ffff88026eec9070 RBX: ffff88023c8f6b00 RCX: 00000000000000ee
May 12 11:56:39 31.172.31.251 [829681.040880] RDX: 00000000000004a0 RSI: ffff88006cd1f000 RDI: ffff88026eec9422
May 12 11:56:39 31.172.31.251 [829681.040927] RBP: ffff880280e03b38 R08: 00000000000006c0 R09: ffff88026eec9062
May 12 11:56:39 31.172.31.251 [829681.040973] R10: 0100000000000000 R11: 00000000af9a2116 R12: ffff88023f440d00
May 12 11:56:39 31.172.31.251 [829681.041020] R13: ffff88006cd1ec66 R14: ffff88025dcf1cc0 R15: 00000000000004a8
May 12 11:56:39 31.172.31.251 [829681.041075] FS:  0000000000000000(0000) GS:ffff880280e00000(0000) knlGS:ffff880280e00000
May 12 11:56:39 31.172.31.251 [829681.041124] CS:  e033 DS: 0000 ES: 0000 CR0: 0000000080050033
May 12 11:56:39 31.172.31.251 [829681.041153] CR2: ffff88006cd1f000 CR3: 0000000271ae8000 CR4: 0000000000042660
May 12 11:56:39 31.172.31.251 [829681.041202] Stack:
May 12 11:56:39 31.172.31.251 [829681.041225]  ffffffff814d38ff
May 12 11:56:39 31.172.31.251  ffff88025b5fa400
May 12 11:56:39 31.172.31.251  ffff880280e03aa8
May 12 11:56:39 31.172.31.251  9401294600a7012a
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.041287]  0100000000000000
May 12 11:56:39 31.172.31.251  ffffffff814a000a
May 12 11:56:39 31.172.31.251  000000008181a460
May 12 11:56:39 31.172.31.251  00000000000080fe
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.041346]  1ad902feff7ac40e
May 12 11:56:39 31.172.31.251  ffff88006c5fd980
May 12 11:56:39 31.172.31.251  ffff224afc3e1600
May 12 11:56:39 31.172.31.251  ffff88023f440d00
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.041407] Call Trace:
May 12 11:56:39 31.172.31.251 [829681.041435]  <IRQ>
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.041441]
May 12 11:56:39 31.172.31.251  [<ffffffff814d38ff>] ? ndisc_send_redirect+0x3bf/0x410
May 12 11:56:39 31.172.31.251 [829681.041506]  [<ffffffff814a000a>] ? ipmr_device_event+0x7a/0xd0
May 12 11:56:39 31.172.31.251 [829681.041548]  [<ffffffff814bc74c>] ? ip6_forward+0x71c/0x850
May 12 11:56:39 31.172.31.251 [829681.041585]  [<ffffffff814c9e54>] ? ip6_route_input+0xa4/0xd0
May 12 11:56:39 31.172.31.251 [829681.041621]  [<ffffffff8141f1a3>] ? __netif_receive_skb_core+0x543/0x750
May 12 11:56:39 31.172.31.251 [829681.041729]  [<ffffffff8141f42f>] ? netif_receive_skb_internal+0x1f/0x80
May 12 11:56:39 31.172.31.251 [829681.041771]  [<ffffffffa0585eb2>] ? br_handle_frame_finish+0x1c2/0x3c0 [bridge]
May 12 11:56:39 31.172.31.251 [829681.041821]  [<ffffffffa058c757>] ? br_nf_pre_routing_finish_ipv6+0xc7/0x160 [bridge]
May 12 11:56:39 31.172.31.251 [829681.041872]  [<ffffffffa058d0e2>] ? br_nf_pre_routing+0x562/0x630 [bridge]
May 12 11:56:39 31.172.31.251 [829681.041907]  [<ffffffffa0585cf0>] ? br_handle_local_finish+0x80/0x80 [bridge]
May 12 11:56:39 31.172.31.251 [829681.041955]  [<ffffffff8144fb65>] ? nf_iterate+0x65/0xa0
May 12 11:56:39 31.172.31.251 [829681.041987]  [<ffffffffa0585cf0>] ? br_handle_local_finish+0x80/0x80 [bridge]
May 12 11:56:39 31.172.31.251 [829681.042035]  [<ffffffff8144fc16>] ? nf_hook_slow+0x76/0x130
May 12 11:56:39 31.172.31.251 [829681.042067]  [<ffffffffa0585cf0>] ? br_handle_local_finish+0x80/0x80 [bridge]
May 12 11:56:39 31.172.31.251 [829681.042116]  [<ffffffffa0586220>] ? br_handle_frame+0x170/0x240 [bridge]
May 12 11:56:39 31.172.31.251 [829681.042148]  [<ffffffff8141ee24>] ? __netif_receive_skb_core+0x1c4/0x750
May 12 11:56:39 31.172.31.251 [829681.042185]  [<ffffffff81009f9c>] ? xen_clocksource_get_cycles+0x1c/0x20
May 12 11:56:39 31.172.31.251 [829681.042217]  [<ffffffff8141f42f>] ? netif_receive_skb_internal+0x1f/0x80
May 12 11:56:39 31.172.31.251 [829681.042251]  [<ffffffffa063f50f>] ? xenvif_tx_action+0x49f/0x920 [xen_netback]
May 12 11:56:39 31.172.31.251 [829681.042299]  [<ffffffffa06422f8>] ? xenvif_poll+0x28/0x70 [xen_netback]
May 12 11:56:39 31.172.31.251 [829681.042331]  [<ffffffff8141f7b0>] ? net_rx_action+0x140/0x240
May 12 11:56:39 31.172.31.251 [829681.042367]  [<ffffffff8106c6a1>] ? __do_softirq+0xf1/0x290
May 12 11:56:39 31.172.31.251 [829681.042397]  [<ffffffff8106ca75>] ? irq_exit+0x95/0xa0
May 12 11:56:39 31.172.31.251 [829681.042432]  [<ffffffff8135a285>] ? xen_evtchn_do_upcall+0x35/0x50
May 12 11:56:39 31.172.31.251 [829681.042469]  [<ffffffff8151669e>] ? xen_do_hypervisor_callback+0x1e/0x30
May 12 11:56:39 31.172.31.251 [829681.042499]  <EOI>
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.042506]
May 12 11:56:39 31.172.31.251  [<ffffffff810013aa>] ? xen_hypercall_sched_op+0xa/0x20
May 12 11:56:39 31.172.31.251 [829681.042561]  [<ffffffff810013aa>] ? xen_hypercall_sched_op+0xa/0x20
May 12 11:56:39 31.172.31.251 [829681.042592]  [<ffffffff81009e7c>] ? xen_safe_halt+0xc/0x20
May 12 11:56:39 31.172.31.251 [829681.042627]  [<ffffffff8101c8c9>] ? default_idle+0x19/0xb0
May 12 11:56:39 31.172.31.251 [829681.042666]  [<ffffffff810a83e0>] ? cpu_startup_entry+0x340/0x400
May 12 11:56:39 31.172.31.251 [829681.042705]  [<ffffffff81903076>] ? start_kernel+0x497/0x4a2
May 12 11:56:39 31.172.31.251 [829681.042735]  [<ffffffff81902a04>] ? set_init_arg+0x4e/0x4e
May 12 11:56:39 31.172.31.251 [829681.042767]  [<ffffffff81904f69>] ? xen_start_kernel+0x569/0x573
May 12 11:56:39 31.172.31.251 [829681.042797] Code:
May 12 11:56:39 31.172.31.251  <f3>
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.043113] RIP
May 12 11:56:39 31.172.31.251  [<ffffffff812b7e56>] memcpy+0x6/0x110
May 12 11:56:39 31.172.31.251 [829681.043145]  RSP <ffff880280e03a58>
May 12 11:56:39 31.172.31.251 [829681.043170] CR2: ffff88006cd1f000
May 12 11:56:39 31.172.31.251 [829681.043488] ---[ end trace 1838cb62fe32daad ]---
May 12 11:56:39 31.172.31.251 [829681.048905] Kernel panic - not syncing: Fatal exception in interrupt
May 12 11:56:39 31.172.31.251 [829681.048978] Kernel Offset: 0x0 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffff9fffffff)

I'm not that good at reading this kind of output, but to me it seems that ndisc_send_redirect is at fault. When googling for "ndisc_send_redirect" you can find a patch on lkml.org and Debian bug #804079; both seem to be related to IPv6.

When looking at the Linux kernel source mentioned in the lkml patch, I see that this patch is already applied (line 1510):

        if (ha) 
                ndisc_fill_addr_option(buff, ND_OPT_TARGET_LL_ADDR, ha);

So, while the patch was intended to prevent "data corruption or in the worst case a panic when the skb_put failed", it does not help in my case or in the case of #804079.

Any tips are appreciated!

PS: I'll contribute to that bug in the BTS, of course!

Attachment: syslog-xen-crash.txt (24.27 KB)

12 May, 2016 06:38PM by ij

Matthew Garrett

Convenience, security and freedom - can we pick all three?

Moxie, the lead developer of the Signal secure communication application, recently blogged on the tradeoffs between providing a supportable federated service and providing a compelling application that gains significant adoption. There's a set of perfectly reasonable arguments around that that I don't want to rehash - regardless of feelings on the benefits of federation in general, there's certainly an increase in engineering cost in providing a stable intra-server protocol that still allows for addition of new features, and the person leading a project gets to make the decision about whether that's a valid tradeoff.

One voiced complaint about Signal on Android is the fact that it depends on the Google Play Services. These are a collection of proprietary functions for integrating with Google-provided services, and Signal depends on them to provide a good out-of-band notification protocol, allowing Signal to be notified when new messages arrive even if the phone is otherwise in a power-saving state. At the time this decision was made, there were no terribly good alternatives for Android. Even now, nobody has really demonstrated a free implementation that supports several million clients and has no negative impact on battery life, so if your aim is to write a secure messaging client that will be adopted by as many people as possible, keeping this dependency is entirely rational.

On the other hand, there are users for whom the decision not to install a Google root of trust on their phone is also entirely rational. I have no especially good reason to believe that Google will ever want to do something inappropriate with my phone or data, but it's certainly possible that they'll be compelled to do so against their will. The set of people who will ever actually face this problem is probably small, but it's probably also the set of people who benefit most from Signal in the first place.

(Even ignoring the dependency on Play Services, people may not find the official client sufficient - it's very difficult to write a single piece of software that satisfies all users, whether that be down to accessibility requirements, OS support or whatever. Slack may be great, but there are still people who choose to use Hipchat)

This shouldn't be a problem. Signal is free software and anybody is free to modify it in any way they want to fit their needs, and as long as they don't break the protocol code in the process it'll carry on working with the existing Signal servers and allow communication with people who run the official client. Unfortunately, Moxie has indicated that he is not happy with forked versions of Signal using the official servers. Since Signal doesn't support federation, that means that users of forked versions will be unable to communicate with users of the official client.

This is awkward. Signal is deservedly popular. It provides strong security without being significantly more complicated than a traditional SMS client. In my social circle there's massively more users of Signal than any other security app. If I transition to a fork of Signal, I'm no longer able to securely communicate with them unless they also install the fork. If the aim is to make secure communication ubiquitous, that's kind of a problem.

Right now the choices I have for communicating with people I know are either convenient and secure but require non-free code (Signal), convenient and free but insecure (SMS) or secure and free but horribly inconvenient (gpg). Is there really no way for us to work as a community to develop something that's all three?


12 May, 2016 02:40PM

Michal Čihař

Changed Debian repository signing key

After getting complaints from apt and users, I've finally decided to upgrade the signing key on my Debian repository to something more decent than DSA. If you are using that repository, you will now have to fetch the new key to make it work again.

The old DSA key was there really because of my laziness, as I didn't want users to have to reimport the key, but I think it's really good that apt started to complain about it (it doesn't complain about DSA itself, but rather about the SHA1 signatures, which are the most you can get out of a DSA key).

Anyway, the new key ID is DCE7B04E7C6E3CD9 and the fingerprint is 4732 8C5E CD1A 3840 0419 1F24 DCE7 B04E 7C6E 3CD9. It's signed by my GPG key, so you can verify it this way. Of course, the instructions on my Debian repository page have been updated as well.
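
For illustration, here is a sketch using the python-gnupg module that fetches the key and checks it against the published fingerprint; the keyserver and the scratch GNUPGHOME are assumptions, adjust to taste:

import os
import gnupg

# Key ID and fingerprint as published above.
EXPECTED = '47328C5ECD1A384004191F24DCE7B04E7C6E3CD9'

# Scratch keyring so this doesn't touch your real GNUPGHOME.
home = '/tmp/repo-key-check'
os.makedirs(home, exist_ok=True)
os.chmod(home, 0o700)

gpg = gnupg.GPG(gnupghome=home)
result = gpg.recv_keys('hkp://pool.sks-keyservers.net', 'DCE7B04E7C6E3CD9')

# Only trust the key if the full fingerprint matches the announcement.
if EXPECTED in result.fingerprints:
    print('fingerprint matches the announced key')
else:
    print('fingerprint mismatch, do not use this key!')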

Filed under: Debian English | 2 comments

12 May, 2016 07:10AM

Petter Reinholdtsen

Debian now with ZFS on Linux included

Today, after many years of hard work by many people, ZFS for Linux finally entered Debian. The package status can be seen on the package tracker for zfs-linux and on the team status page. If you want to help out, please join us. The source code is available via git on Alioth. It would also be great if you could help out with the dkms package, as it is an important piece of the puzzle to get ZFS working.

12 May, 2016 05:30AM

May 11, 2016

Elena 'valhalla' Grandi

GnuPG Crowdfunding and sticker

I've just received my laptop sticker from the GnuPG crowdfunding campaign http://goteo.org/project/gnupg-new-website-and-infrastructure: it is of excellent quality, but comes with HOWTO-like detailed instructions to apply it the proper way.

This strikes me as oddly appropriate.

#gnupg

11 May, 2016 12:22PM by Elena ``of Valhalla''