December 11, 2017

Joey Hess

two holiday stories

Two stories of something nice coming out of something not-so-nice for the holidays.

Story the first: The Gift That Kept on Giving

I have a Patreon account that is a significant chunk of my funding to do what I do. Patreon has really pissed off a lot of people this week, and people are leaving it in droves. My Patreon funding is down 25%.

This is an opportunity for Liberapay, which is run by a nonprofit, avoids Patreon's excessive fees, and is free software to boot. So now I have a Liberapay account and have diversified my sustainable funding some more, although only half of the people I lost from Patreon have moved over. A few others have found other ways to donate to me, including snail mail and Paypal, and others I'll just lose. Thanks, Patreon.

Yesterday I realized I should check if anyone had decided to send me bitcoin. Bitcoin donations are weird because no one ever tells me that they made them. Also because it's never clear if the motive is to get me invested in bitcoin or to send me some money. I'd rather not be invested in risky currency speculation, preferring risks like "write free software without any clear way to get paid for it", so I always cash it out immediately.

I had not used bitcoin in a long time. I could see long ago that its development community was unhealthy, that there was going to be a messy fork, and I didn't want the drama of that. My bitcoin wallet files were long deleted. Checking my address online, I saw that in fact two people had reacted to Patreon by sending bitcoin to me.

I checked some old notes to find the recovery seeds, and restored "hot wallet" and "cold wallet", not sure which was my public incoming wallet. Neither was, and after some concerned scrambling, I found the gpg locked file in a hidden basement subdirectory that let me access my public incoming wallet and the two donations waiting in it.

What of the other two wallets? "Hot wallet" was empty. But "cold wallet" turned out to be some long forgotten wallet, and yes, this is now a story about "some long forgotten bitcoin wallet" -- you know where this is going right?

Yeah, well, it didn't have a life-changing amount of bitcoin in it, but it had a little almost-dust from a long-ago bitcoin donor, which, at current crazy bitcoin prices, is enough that I may need to fill out a tax form now that I've sold it. And so I will be having a happy holidays, no matter how the Patreon implosion goes. But for sustainable funding going forward, I do hope that Liberapay works out.

Story the second: "a lil' positive end note does wonders"

I added this to the end of git-annex's bug report template on a whim two years ago:

Have you had any luck using git-annex before? (Sometimes we get tired of reading bug reports all day and a lil' positive end note does wonders)

That prompt turned out to be much more successful than I had expected, and so I want to pass the gift of the idea on to you. Consider adding something like that to your project's bug report template.

It really works: I'll see a bug report that's lost and confused and discouraged, and keep reading to make sure I see whatever nice thing there might be at the end. It's not just about meaningless politeness either; it's about getting an impression of whether the user is having any success at all, and how experienced they are in general, which is important in understanding where a bug report is coming from.

I've learned more from it than I have from most other interactions with git-annex users, including the git-annex user surveys. Out of 217 bug reports that used this template, 182 answered the question. Here are some of my favorite answers.

Have you had any luck using git-annex before? (Sometimes we get tired of reading bug reports all day and a lil' positive end note does wonders)

  • I do! I wouldn't even have my job, if it wasn't for git-annex. ;-)

  • Yeah, it works great! If not for it I would not have noticed this data corruption until it was too late.

  • Indeed. All my stuff (around 3.5 terabytes) is stored in git-annex with at least three copies of each file on different disks and locations, spread over various hard disks, memory sticks, servers and you name it. Unused disk space is a waste, so I fill everything up to the brim with extra copies.

    In other words, Git-Annex and I are very happy together, and I'd like to marry it. And because you are the father, I hereby respectfully ask for your blessing.

  • Yes, git with git annex has revolutionised my scientific project file organisation and thats why I want to improve it.

  • <3 <3 <3

  • We use git-annex for our open-source FreeSurfer software and find very helpful indeed. Thank you. https://surfer.nmr.mgh.harvard.edu/

  • Yes I have! I've used it manage lots of video editing disks before, and am now migrating several slightly different copies of 15TB sized documentary footage from random USB3 disks and LTO tapes to a RAID server with BTRFS.

  • Oh yeah! This software is awesome. After getting used to having "dummy" shortcuts to content I don't currently have, with the simple ability to get/drop that content, I can't believe I haven't seen this anywhere before. If there is anything more impressive than this software, it's the support it has had from Joey over all this time. I'd have pulled my hair out long ago. :P

  • kinda

  • Yep, works apart from the few tests that fail.

  • Not yet, but I'm excited to make it work!

  • Roses are red
    Violets are blue
    git-annex is awesome
    and so are you
    ;-)
    But bloody hell, it's hard to get this thing to build.

  • git-annex is awesome, I lean on it heavily nearly every single day.

  • I have a couple of repositories atm, one with my music, another that backs up our family pictures for the family and uses Amazon S3 as a backup.

  • Yes! It's by far one of my favorite apps! it works very well on my laptop, on my home file server, and on my internal storage on my Android phone :)

  • Yes! I've been using git-annex quite a bit over the past year, for everything from my music collection to my personal files. Using it for a not-for-profit too. Even trying to get some Mac and Windows users to use it for our HOA's files.

  • I use git-annex for everything. I've got 10 repositories and around 2.5TB of data in those repos which in turn is synced all over the place. It's excellent.

  • Really nice tool. Thanks Joey!

  • Git-annex rocks !!!!

  • I'd love to say I have. You'll hear my shout of joy when I do.

  • Mixed bag, works when it works, but I've had quite a few "unexplained" happenings. Perservering for now, hoping me reporting bugs will see things improve...

  • Yes !!! I'm moving all my files into my annex. It is very robust; whenever something is wrong there is always some other copy somewhere that can be used.

  • Yes! git annex has been enormously helpful. Thanks so much for this tool.

  • Oh yes! I love git-annex :) I've written the hubiC special remote for git-annex, the zsh completion, contributed to the crowdfunding campaigns, and I'm a supporter on Patreon :)

  • Yes, managing 30000 files, on operating systems other than Windows though...

  • Of course ;) All the time

  • I trust Git Annex to keep hundreds of GB of data safe, and it has never failed me - despite my best efforts

  • Oh yeah, I am still discovering this powerfull git annex tool. In fact, collegues and I are forming a group during the process to exchange about different use cases, encountered problems and help each other.

  • I love the metadata functionality so much that I wrote a gui for metadata operations and discovered this bug.

  • Sure, it works marvels :-) Also what I was trying to do is perhaps not by the book...

  • Oh, yes. It rules. :) One of the most important programs I use because I have all my valuable stuff in it. My files have never been safer.

  • I'm an extremely regular user of git-annex on OS X and Linux, for several years, using it as a podcatcher and to manage most of my "large file" media. It's one of those "couldn't live without" tools. Thanks for writing it.

  • Yes, I've been using git annex for I think a year and a half now, on several repositories. It works pretty well. I have a total of around 315GB and 23K annexed keys across them (counting each annex only once, even though they're cloned on a bunch of machines).

  • I only find (what I think are) bugs because I use it and I use it because I like it. I like it because it works (except for when I find actual bugs :]).

  • I'm new to git-annex and immediately astonished by its unique strength.

  • As mentioned before, I am very, very happy with git-annex :-) Discovery of 2015 for me.

  • git-annex is great and revolutionized my file organization and backup structure (if they were even existing before)

  • That’s just a little hiccup in, up to now, various months of merry use! ;-)

  • Yes. Love it. Donated. Have been using it for years. Recommend it and get(/force) my collaborators to use it. ;-)

  • git-annex is an essential building block in my digital life style!

  • Well, git-annex is wonderful!

A lil' positive end note turned into a big one, eh? :)

11 December, 2017 09:04PM

Wouter Verhelst

Systemd, Devuan, and Debian

Somebody recently pointed me towards a blog post by a small business owner who proclaimed to the world that using Devuan (and not Debian) is better, because it's cheaper.

Hrm.

Looking at creating Devuan, which means splitting of Debian, economically, you caused approximately infinite cost.

Well, no. I'm immensely grateful to the Devuan developers, because when they announced their fork, all the complaints about systemd on the debian-devel mailinglist ceased to exist. Rather than a cost, that was an immensely gratifying experience, and it made sure that I started reading the debian-devel mailinglist again, which I had stopped doing for a while before that. Meanwhile, life in Debian went on as it always has.

Debian values choice. Fedora may not be about choice, but Debian is. If there are two ways of doing something, Debian will include all four. If you want to run a Linux system, and you're not sure whether to use systemd, upstart, or something else, then Debian is for you! (well, except if you want to use upstart, which is in jessie but not in stretch). Debian defaults to using systemd, but it doesn't enforce it; and while it may require a bit of manual handholding to make sure that systemd never ever ever ends up on your system, this is essentially not difficult.

you@your-machine:~$ apt install equivs; equivs-control your-sanity; $EDITOR your-sanity

Now make sure that what you get looks something like this (ignoring comments):

Section: misc
Priority: standard
Standards-Version: <whatever was there>

Package: your-sanity
Essential: yes
Conflicts: systemd-sysv
Description: Make sure this system does not install what I don't want
 The packages in the Conflicts: header cannot be installed without
 very difficult steps, and apt will never offer to install them.
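
Once that control file is in place, equivs-build turns it into a .deb; the exact filename depends on the Version field, so the dpkg line below is only illustrative:

equivs-build your-sanity
dpkg -i your-sanity_1.0_all.deb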

Install it on every system where you don't want to run systemd. You're done, you'll never run systemd. Well, except if someone types the literal phrase "Yes, do as I say!", including punctuation and everything, when asked to do so. If you do that, well, you get to keep both pieces. Also, did you see my pun there? Yes, it's a bit silly, I admit it.

But before you take that step, consider this.

Four years ago, I was an outspoken opponent of systemd. It was a bad idea, I thought. It is not portable. It will cause the death of Debian GNU/kFreeBSD, and a few other things. It is difficult to understand and debug. It comes with a truckload of other things that want to replace the universe. Most of all, their developers had a pretty bad reputation of being, pardon my French, arrogant assholes.

Then, the systemd maintainers filed bug 796633, asking me to provide a systemd unit for nbd-client, since it provided an rcS init script (which is really a very special case), and the compatibility support for that in systemd was complicated and support for it would be removed from the systemd side. Additionally, providing a systemd template unit would make the systemd nbd experience much better, without dropping support for other init systems (those cases can still use the init script). In order to develop that, I needed a system to test things on. Since I usually test things on my laptop, I installed systemd on my laptop. The intent was to remove it afterwards. However, for various reasons, that never happened, and I still run systemd as my pid1. Here's why:

  • Systemd is much faster. Where my laptop previously took 30 to 45 seconds to boot using sysvinit, it takes less than five. In fact, it took longer for it to do the POST than it took for the system to boot from the time the kernel was loaded. I changed the grub timeout from the default of five seconds to something more reasonable, because I found that five seconds was just ridiculously long if it takes about half that for the rest of the system to boot to a login prompt afterwards.
  • Systemd is much more reliable. That is, it will fail more often, but it will reliably fail. When it fails, it will tell you why it failed, so you can figure out what went wrong and fix it, making sure the system never fails again in the same fashion. The unfortunate fact of the matter is that there were many bugs in our init scripts, but they were never discovered and therefore lingered. For instance, you would not know about this race condition between two init scripts, because sysvinit is so dog slow that 99 times out of 100 it would not trigger, and therefore you don't see it. The one time you do see it, something didn't come up, but sysvinit doesn't log about such errors (it expects the init script to do so), so all you can do is go "damn, wtf happened?!?" and manually start things, allowing the bug to remain. These race conditions were much more likely to trigger with systemd, which caused it a lot of grief originally; but really, you should be thankful, because now that all these race conditions have been discovered by way of an init system that is much more verbose about such problems, they have also been fixed, and your sysvinit system is more reliable, too, as a result. There are other similar issues (dependency loops, to name one) that systemd helped fix.
  • Systemd is different, and that requires some re-schooling. When I first moved my laptop to systemd, I remember running into some kind of issue that I couldn't figure out how to fix. No, I don't remember the specifics of that issue, but they don't really matter. The point is this: at first, I thought "this is horrible, you can't debug it, how can you use such a system". And while it's true that undebuggable systems are not very useful, the systemd maintainers know this too, and therefore systemd is debuggable. It's just that you don't debug it by throwing some imperative init script code through a debugger (or, worse, something like sh -x), because there is no imperative init script code to throw through such a debugger, and therefore that makes little sense. Instead, there is a wealth of different tools to inspect the systemd state, and a lot of documentation on what the different things mean (a few examples follow right after this list). It takes a while to internalize all that; and if you're not convinced that systemd is a good thing then it may mean some cursing while you're fighting your way through. But in the end, systemd is not more difficult to debug than simple init scripts -- in fact, it sometimes may be easier, because the system is easier to reason about.
  • While systemd comes with a truckload of extra daemons (systemd-networkd, systemd-resolved, systemd-hostnamed, etc etc etc), the systemd in their names does not imply that they are required by systemd. In fact, it's the other way around: you are required to run systemd if you want to run systemd-networkd (etc), because systemd-networkd (etc) make extensive use of the systemd infrastructure and public APIs; but nothing inside systemd requires that systemd-networkd (etc) are running. In fact, on my personal laptop, beyond systemd and udev themselves, I'm not using anything that gets built from the systemd source.
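
On the debugging point above: a few of the inspection commands I tend to reach for. The unit name is only an example; substitute whatever you are chasing:

systemctl status nbd-client.service    # current state, recent journal lines, cgroup contents
journalctl -u nbd-client.service -b    # everything that unit logged since this boot
systemd-analyze critical-chain         # which units the boot actually waited for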

I'm not saying these reasons are universally true, and I'm not saying that you'll like systemd as much as I have. I am saying, however, that you should give it an honest attempt before you say "I'm not going to run systemd, ever," because you might be surprised by the huge gap between what you expected and what you got. I know I was.

So, given all that, do I think that Devuan is a good idea? It is if you want flamewars. It gives those people who want to vilify systemd a place to do that without bothering Debian with their opinion. But beyond that, if you want to run Debian and you don't want to run systemd, you can! Just make sure you choose the right options, and you're done.

All that makes me wonder why today, almost half a year after the initial release of Debian 9.0 "Stretch", Devuan Ascii still hasn't been released, and why it took them over two years to release their Devuan Jessie based on Debian Jessie. But maybe that's just me.

11 December, 2017 01:00PM

François Marier

Using all of the 5 GHz WiFi frequencies in a Gargoyle Router

WiFi in the 2.4 GHz range is usually fairly congested in urban environments. The 5 GHz band used to be better, but an increasing number of routers now support it and so it has become fairly busy as well. It turns out that there are a number of channels on that band that nobody appears to be using despite being legal in my region.

Why are the middle channels unused?

I'm not entirely sure why these channels are completely empty in my area, but I would speculate that access point manufacturers don't want to deal with the extra complexity of the middle channels. Indeed these channels are not entirely unlicensed. They are also used by weather radars, for example. If you look at the regulatory rules that ship with your OS:

$ iw reg get
global
country CA: DFS-FCC
    (2402 - 2472 @ 40), (N/A, 30), (N/A)
    (5170 - 5250 @ 80), (N/A, 17), (N/A), AUTO-BW
    (5250 - 5330 @ 80), (N/A, 24), (0 ms), DFS, AUTO-BW
    (5490 - 5600 @ 80), (N/A, 24), (0 ms), DFS
    (5650 - 5730 @ 80), (N/A, 24), (0 ms), DFS
    (5735 - 5835 @ 80), (N/A, 30), (N/A)

you will see that these channels are flagged with "DFS". That stands for Dynamic Frequency Selection and it means that WiFi equipment needs to be able to detect when the frequency is used by radars (by detecting their pulses) and automatically switch to a different channel for a few minutes.

So an access point needs extra hardware and extra code to avoid interfering with priority users. Additionally, different channels have different bandwidth limits so that's something else to consider if you want to use 40/80 MHz at once.

The first time I tried setting my access point channel to one of the middle 5 GHz channels, the SSID wouldn't show up in scans and the channel was still empty in WiFi Analyzer.

I tried changing the channel again, but this time, I ssh'd into my router and looked at the error messages using this command:

logread -f

I found a number of errors claiming that these channels were not authorized for the "world" regulatory authority.

Because Gargoyle is based on OpenWRT, there are a lot more wireless configuration options available than what's exposed in the Web UI.

In this case, the solution was to explicitly set my country in the wireless options by putting:

country 'CA'

(where CA is the country code where the router is physically located) in the 5 GHz radio section of /etc/config/wireless on the router.
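
For reference, the relevant radio section then looks roughly like this; apart from the country line, the option names and values are illustrative and will vary with your hardware and Gargoyle/OpenWRT version:

config wifi-device 'radio1'
        option type 'mac80211'
        option hwmode '11a'
        option channel '120'
        option country 'CA'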

Then I rebooted and I was able to set the channel successfully via the Web UI.

If you are interested, there is a lot more information about how all of this works in the kernel documentation for the wireless stack.

11 December, 2017 02:03AM

December 10, 2017

Andrew Cater

Debian 8.10 and Debian 9.3 released - CDs and DVDs published

Done a tiny bit of testing for this: Sledge and RattusRattus and others have done far more.

Always makes me feel good: always makes me feel as if Debian is progressing - and I'm always amazed I can persuade my oldest 32 bit laptop to work :)

10 December, 2017 05:27PM by Andrew Cater (noreply@blogger.com)

December 09, 2017

Dirk Eddelbuettel

#12: Know and Customize your OS and Work Environment

Welcome to the twelfth post in the randomly relevant R recommendations series, or R4 for short. This post inserts a short diversion into what was planned as a sequence of posts on faster installations that started recently with this post, but we will return to it very shortly (for various definitions of "very" or "shortly").

Earlier today Davis Vaughn posted a tweet about a blog post of his describing a (term) paper he wrote modeling bitcoin volatility using Alexios's excellent rugarch package---and all that typeset with the styling James and I put together in our little pinp package, which is indeed very suitable for such tasks of writing (R)Markdown + LaTeX + R code combinations conveniently in a single source file.

Leaving aside the need to celebrate a term paper with a blog post and tweet, pinp is indeed very nice and deserving of some additional exposure and tutorials. Now, Davis sets out to do all this inside RStudio---as folks these days seem to like to limit themselves to a single tool or paradigm. Older and wiser users prefer the flexibility of switching tools and approaches, but alas, we digress. While Davis manages of course to do all this in RStudio, which is indeed rather powerful and therefore rightly beloved, he closes on

I wish there was some way to have Live Rendering like with blogdown so that I could just keep a rendered version of the paper up and have it reload every time I save. That would be the dream!

and I can only add a forceful: Fear not, young man, for we can help thou!

Modern operating systems have support for file-change notification interfaces such as inotify, which can be used from the shell. Just as your pdf application refreshes automagically when a pdf file is updated, we can hook into this from the shell to actually create the pdf when the (R)Markdown file is updated. I am going to use a tool readily available on my Linux systems; macOS will surely have something similar. The entr command takes one or more file names supplied on stdin and executes a command when one of them changes. Handy for invoking make whenever one of your header or source files changes, and usable here. E.g. the last markdown file I was working on was named comments.md and contained comments to a referee, and we can auto-process it on each save via

echo comments.md | entr render.r comments.md

which uses render.r from littler (new release soon too...; a simple Rscript -e 'rmarkdown::render("comments.md")' would probably work too, but render.r is shorter and a little more powerful, so I use it more often myself) on the input file comments.md, which also happens to be the (here sole) file being monitored.
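
Since entr takes any number of file names on stdin, the same trick scales to watching several sources at once; a minimal sketch along the same lines:

# re-render whenever any Markdown file in the directory changes
ls *.md | entr render.r comments.md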

And that is really all there is to it. I wanted / needed something like this a few months ago at work too, and may have used an inotify-based tool there but cannot find any notes. Python has something similar via watchdog which is yet again more complicated / general.

It turns out that auto-processing is actually not that helpful as we often save before an expression is complete, leading to needless error messages. So at the end of the day, I often do something much simpler. My preferred editor has a standard interface to 'building': pressing C-x c loads a command (it recalls) that defaults to make -k (i.e., make with error skipping). Simply replacing that with render.r comments.md (in this case) means we get an updated pdf file when we want with a simple customizable command / key-combination.

So in sum: it is worth customizing your environments, learning about what your OS may have, and looking beyond a single tool / editor / approach. Even dreams may come true ...

Postscriptum: And Davis takes this in stride and almost immediately tweeted a follow-up with a nice screen capture mp4 movie showing that entr does indeed work just as well on his macbook.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

09 December, 2017 10:30PM

Craig Small

WordPress 4.9.1

After a much longer than expected break due to moving and the resulting lack of Internet, plus WordPress releasing a package with a non-free file, the Debian package for WordPress 4.9.1 has been uploaded!

WordPress 4.9 has a number of improvements, especially around the customiser components, so that looked pretty slick. The editor for the customiser now has a series of linters that will warn you if you write something bad, which is a very good thing! Unfortunately the JavaScript linter is jshint, which uses a non-free license that its developers are attempting to fix. I have also reported the problem to WordPress upstream so they can have a look at it.

While this was all going on, there were 4 security issues found in WordPress which resulted in the 4.9.1 release.

Finally I got the time to look into the jshint problem, and the Internet access to actually download the upstream files and upload the Debian packages. So version 4.9.1-1 of the packages has now been uploaded and should be in the mirrors soon. I’ll start looking at the 4.9.1 patches to see what is relevant for Stretch and Jessie.

09 December, 2017 06:23AM by Craig

December 08, 2017

Back Online

I now have Internet back! Which means I can try to get the Debian WordPress packages bashed into shape. Unfortunately they still have the problem with the horrible JSON “no evil” license, which causes so many problems all over the place.

I’m hoping there is a simple way of just removing that component and going from there.

08 December, 2017 10:58AM by Craig

December 07, 2017

Thomas Goirand

Testing OpenStack using tempest: all is packaged, try it yourself

tl;dr: this post explains how the new openstack-tempest-ci-live-booter package configures a machine to PXE boot a Debian Live system running on KVM in order to run functional testing of OpenStack. It may be of interest to you if you want to learn how to PXE boot a KVM virtual machine running Debian Live, even if you aren’t interested in OpenStack.

Moving my CI from one location to another led me to package it fully

After packaging a release of OpenStack, it’s kind of mandatory to functionally test the set of packages. This is done by running the tempest test suite on an already deployed OpenStack installation. I used to do that on real hardware provided by my employer. But since I’ve lost my job (I’m still looking for a new employer at this time), I also lost access to the hardware they were providing to me.

As a consequence, I searched for a sponsor to provide the hardware to run tempest on. I first sent a mail to the openstack-dev list, asking for such a hardware. Then Rochelle Grober and Stephen Li from Huawei got me in touch with Zachary Smith, the CEO of Packet.net. And packet.net gave me an account on their system. I am amazed how good their service is. They provide baremetal servers around the world (15 data centers), provisioned using an API (meaning, fully automatically). A big thanks to them!

Anyway, even if I had been planning for a few weeks to give a big thanks to the above people (they really deserve it!), that isn't the only goal of this post. The other goal is to introduce how to run your own tempest CI on your own machine. Since I have been in the situation where my CI had to move twice, I decided to industrialize it, and fully automate the setup of the CI server. And what does a DD do when writing software? Package it, of course. So I packaged it all, and uploaded it to the archive. Here's how to use all of this.

General principle

The best way to run an OpenStack tempest CI is to run it on a Debian Live system. Why? Because setting up a full OpenStack environment takes a lot of time, mostly spent on disk I/O. And on a live system, everything runs on a RAM disk, so installing in this environment is about as fast as it gets. This is what I did when working with Mirantis: I had a real baremetal server, which I was PXE booting into a Debian Live system. However nice, this requires access to 2 servers: one for running the Live system, and one running the dhcp/pxe/tftp server. Also, this means the boot server needs 2 nics, one on the internet, and one for booting the 2nd server that will run the Live system. It was not possible to have such a specific setup at packet.net, so I decided to replicate this using KVM, so it would become portable. And since the servers at packet.net are very fast, it isn't much of an issue anymore to not run on baremetal.

Anyway, let’s dive into setting-up all of this.

Network topology

We’ll assume that one of your interface has internet access, let’s say eth0. Since we don’t want to destroy any of your network config, the openstack-tempest-ci-live-booter package will use a dummy network interface (ie: modprobe dummy) and bridge it to the network interface of the KVM virtual machine. That dummy network interface will be configured with 192.168.100.1, and the Debian Live KVM will use 192.168.100.2. This convenient default can be changed, but then you’ll have to pass your specific network configuration to each and every script (just read the beginning of each script to read the parameters).

Configure the host machine

First install the openstack-tempest-ci-live-booter package. It depends at runtime on isc-dhcp-server, tftpd-hpa, apache2, qemu-kvm and everything else needed to run a Debian Live machine, booting it over PXE / iPXE (the package supports both; more on iPXE later). So, let's do it:

apt-get install openstack-tempest-ci-live-booter

The package, once installed, doesn’t do much. To respect the Debian policy, it can’t touch configuration files of other packages in maintainer scripts. Therefore, you have to manually run:

openstack-tempest-ci-live-booter-config --configure-dummy-nick

Running this script will:

  • configure the kvm-intel module to allow nested virtualization (by unloading the module, adding “options kvm-intel nested=y” to /etc/modprobe.d, and reloading the module)
  • modprobe the dummy kernel module, run “ip link set name tempestnic0 dev dummy0” to create a tempestnic0 dummy interface
  • create a tempestbr bridge, set 192.168.100.1 for the bridge IP, bridge the tempestnic0 and tempesttap
  • configure tftpd-hpa to listen on 192.168.100.1
  • configure isc-dhcp-server to dhcpreply 192.168.100.2 on the tempestbr, so that the KVM machine can boot up with an IP
  • configure apache2 to serve the filesystem.squashfs root filesystem, loaded by the Linux kernel at boot time. Note that you may need to manually start and/or reload apache after this setup though.

Again, you can change the IP addresses if you like. You can also use a real interface if you intend to boot real hardware rather than a KVM machine (in which case, just omit the --configure-dummy-nick, and manually configure your 2nd interface).

Also, openstack-tempest-ci-live-booter provides a /etc/init.d/openstack-tempest-ci-live-booter script which will configure NAT on your server, so that the Debian Live machine has internet access (needed for apt-get operations). Edit the file if you need to change 192.168.100.1/24 to something else. The script will pick up the interface that is connected to the default gateway by itself.
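
Under the hood this boils down to standard Linux NAT; roughly something like the following (a sketch, not the literal contents of the init script, with eth0 standing in for whatever interface holds your default route):

# masquerade the live system's subnet out of the internet-facing interface
iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth0 -j MASQUERADE
sysctl -w net.ipv4.ip_forward=1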

The dhcp server is configured to support both legacy PXE and the new iPXE standard. I had to support iPXE, because that's what the standard KVM ROM does, and I also wanted to keep legacy support for older baremetal hardware. The way iPXE works is that dhcpd tells the client where to fetch the iPXE script, which itself chains to lpxelinux.0 (instead of the standard pxelinux.0). It's rather easy to set up once you understand how it works.
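
The dhcpd.conf side of that dance usually takes the shape of a user-class check; the file names below are purely illustrative, the real ones are shipped by the package:

if exists user-class and option user-class = "iPXE" {
    # iPXE clients (such as the KVM ROM) fetch an iPXE script over HTTP,
    # which then chains to lpxelinux.0
    filename "http://192.168.100.1/boot.ipxe";
} else {
    # legacy PXE clients get a classic bootloader over TFTP
    filename "lpxelinux.0";
}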

Build the live image

Now that the PXE server is configured, it's time to build the Debian live image. Simply do this to build the image, and copy the resulting files into the PXE server folder (ie: /var/lib/tempest-live-booter):

mkdir live
cd live
openstack-tempest-ci-build-live-image --debian-mirror-addr http://ftp.nl.debian.org/debian

Since we need to log into that server later on, the script will create an ssh key-pair. If you want your own keys, simply drop the id_rsa and id_rsa.pub files in your current folder before running the script. Then make it so that this key-pair can later be used by default by the user who will run the tempest script (ie: copy id_rsa and id_rsa.pub into the ~/.ssh folder).

Running the openstack-tempest-ci

What the openstack-tempest-ci script does is (re-)start your KVM virtual machine, ssh into it, upgrade it to sid, install OpenStack, and finally run the whole tempest suite. There are 2 ways to run it: either install the openstack-tempest-ci package, optionally configure it (in /etc/default/openstack-tempest-ci), and simply run the "openstack-tempest-ci" command. Or, you can skip the installation of the package, and simply run it from source:

git clone http://anonscm.debian.org/git/openstack/debian/openstack-meta-packages.git
cd openstack-meta-packages/src
./openstack-tempest-ci

Indeed, the script is designed to copy all scripts from source inside the Debian Live machine before using them. The reason for doing that is that we want to avoid the situation where a modification needs to be uploaded to Debian before being able to test it, and it also had to be possible to run the openstack-tempest-ci script without installing a package (which would need root access that I don't have on casulana.debian.org, where running tempest is needed to test official OpenStack Debian images). So, definitely, feel free to hack everything in openstack-meta-packages/src before running the tempest script. Also, openstack-tempest-ci will look for a sources.list file in the current directory, and upload it to the Debian Live system before doing the upgrade/install. This way, it is easy to use the closest mirror.

07 December, 2017 11:00PM by Goirand Thomas

Chris Lamb

Simple media cachebusting with GitHub pages

GitHub Pages makes it really easy to host static websites, including sites with custom domains or even with HTTPS via CloudFlare.

However, one typical annoyance with static site hosting in general is the lack of cachebusting so updating an image or stylesheet does not result in any change in your users' browsers until they perform an explicit refresh.

One easy way to add cachebusting to your Pages-based site is to use GitHub's support for Jekyll-based sites. To start, first we add some scaffolding to use Jekyll:

$ cd "$(git rev-parse --show-toplevel)
$ touch _config.yml
$ mkdir _layouts
$ echo '{{ content }}' > _layouts/default.html
$ echo /_site/ >> .gitignore

Then in each of our HTML files, we prepend the following header:

---
layout: default
---

This can be performed on your index.html file using sed:

$ sed -i '1s;^;---\nlayout: default\n---\n;' index.html

Alternatively, you can run this against all of your HTML files in one go with:

$ find -not -path './[._]*' -type f -name '*.html' -print0 | \
    xargs -0r sed -i '1s;^;---\nlayout: default\n---\n;'

Due to these new headers, we can obviously no longer simply view our site by pointing our web browser directly at the local files. Thus, we now test our site by running:

$ jekyll serve --watch

... and navigate to http://127.0.0.1:4000/ (Jekyll's default port).


Finally, we need to append the cachebusting strings itself. For example, if we had the following HTML to include a CSS stylesheet:

<link href="/css/style.css" rel="stylesheet">

... we should replace it with:

<link href="/css/style.css?{{ site.time | date: '%s%N' }}" rel="stylesheet">

This adds the current "build" timestamp to the file, resulting in the following HTML once deployed:

<link href="/css/style.css?1507450135153299034" rel="stylesheet">

Don't forget to apply it to all your other static media, including images and Javascript:

<img src="image.jpg?{{ site.time | date: '%s%N' }}">
<script src="/js/scripts.js?{{ site.time | date: '%s%N' }}')">

To ensure that transitively-linked images are cachebusted, instead of referencing them in the CSS you can specify them directly in the HTML instead:

<header style="background-image: url(/img/bg.jpg?{{ site.time | date: '%s%N' }})">

07 December, 2017 10:10PM

Steinar H. Gunderson

Thoughts on AlphaZero

The chess world woke up to something of an earthquake two days ago, when DeepMind (a Google subsidiary) announced that they had adapted their AlphaGo engine to play chess with only minimal domain knowledge—and it was already beating Stockfish. (It also plays shogi, but who cares about shogi. :-) ) Granted, the shock wasn't as huge as what the Go community must have felt when the original AlphaGo came in from nowhere and swept with it the undisputed Go throne and a lot of egos in the Go community over the course of a few short months—computers have been better at chess than humans for a long time—but it's still a huge event.

I see people are trying to make sense of what this means for the chess world. I'm not a strong chess player, an AI expert or a top chess programmer, but I do play chess, I've worked in AI (in Google, briefly in the same division as the DeepMind team) and I run what's the strongest chess analysis website online whenever Magnus Carlsen is playing (next game 17:00 UTC tomorrow!), so I thought I should share some musings.

First some background: We've been trying to make computers play chess for almost 70 years now; originally in the hopes that it would lead us to general AI, although we sort of abandoned that eventually. In almost all of that time, we've used the same basic structure; you have an evaluation function that can look at a specific position and say “I think this is good for white”, and then a search that sees what happens with that evaluation function by playing all possible moves and countermoves (“oh wow, no matter what happens black is going to take white's queen, so maybe this wasn't so good after all”). The evaluation function roughly consists of a few hundred hand-crafted features (everything from “the queen is worth nine points and rooks are five” to more complex issues around king safety, pawn structure and piece mobility) which are more or less added together, and the search tries very hard to prune out uninteresting lines so it can go deeper into the more interesting ones. In the end, you're left with a single “principal variation” (PV) consisting of a series of chess moves (presumably the best the engine can find within the allotted time), and the evaluation of the position at the end of the PV is the final evaluation of the position.

AlphaZero is different. Instead of a hand-crafted evaluation function, it just throws the raw information about the position (where the pieces are, and a few other tidbits like right-to-castle) into a neural network and gets out something like an expected win percentage. And instead of searching for the best line, it uses Monte Carlo tree search to make sort-of a weighted average of possible outcomes, explored in a stochastic way. The neural network is simply optimized through reinforcement learning under self-play; it starts off playing what's essentially random moves (it's restricted from playing illegal ones—that's one of the very few pieces of domain-specific knowledge), but rapidly gets better as the neural network learns what works or not.

These are not new ideas (in fact, I'm hard pressed to find a single new thing in the paper), and the basic structure has been tried on chess in the past with master-level results, but it hasn't really produced anything approaching the top before now. The idea of numerical optimization through self-play is widely used, though, mostly to tune things like piece-square tables and other evaluation function weights. So I think that it's mainly through great engineering and tons of computing power, not a radical breakthrough, that DeepMind has managed to make what's now probably the strongest chess entity on the planet. (I say “probably” because it “only” won 64–36 against Stockfish 8, which is about 100 Elo, and that's probably possible to do with a few hardware doublings and/or Stockfish improvements. Granted, it didn't lose a single game, and it's highly likely that AlphaZero's approach has a lot more room for further improvement than classical alpha-beta has.)

So what do I think AlphaZero will change? In the short term: Nothing. The paper contains ten games (presumably cherry-picked wins) of the 100-game match, and while those show beautiful chess that at times makes Stockfish seem cramped and limited, they don't seem to show any radically new chess ideas like AlphaGo did with Go. Nobody knows when or if DeepMind will release more games, although they have released a fair amount of Go games in the past, and also done Go exhibition matches. People are trying to pick out information from its opening choices (for instance, it prefers the infamous Berlin defense as black), which is interesting, but right now, there's just too little data to kill entire lines or openings.

We're also not likely to stop playing chess anytime soon, for the same reason that Magnus Carlsen nearly hitting 3000 Elo in blitz didn't stop me from playing online. AlphaZero hasn't solved chess by any means, and even though checkers has been weakly solved (Chinook provably never loses a game from the opening position, although it won't win every won position), people still play it even on the top level. Most people simply are not at the level where the existence of perfect play matters, nor is exploring its frontiers their primary motivation.

So the primary question is whether top players can use this to improve their game. Now, DeepMind is not in the business of selling software; they're an AI research company, and AlphaZero runs on hardware (TPUs) you can't buy at this moment, and hardly even rent in the cloud. (Granted, you could probably make AlphaZero run efficiently on GPUs, especially the newer ones that start to get custom blocks for accelerating neural networks, although probably slower and with higher power usage.) Thus, it's unlikely that they will be selling or open-sourcing AlphaZero anytime soon. You could imagine top players wanting to go into talks to pay for exclusive access, but if you look at the costs of developing such a thing (just the training time alone has to be significant), it's obvious that they didn't do this in the hope of recouping the development costs. If anything, you would imagine that they'd sell it as a cloud service, but nothing like that has emerged for AlphaGo, where they have a much larger competitive lead, so it seems unlikely.

Could anyone take their paper and reimplement it? The answer is: Maybe. AlphaGo was two years ago, has been backed up with several papers, and we still don't have anything public that's really close. Tencent's AI lab has made their own clone (Fine Art), and then there's DeepZenGo and others, but nothing nearly as strong that you can download or buy at this stage (as far as I know, anyway). Chess engines are typically made by teams of one or two people, and so far, deep learning-based approaches seem to require larger teams and a fair amount of (expensive) computing time, and most chess programmers are not deep learning experts anyway. It's hard to make a living off of selling chess engines even in a small team; one could again assume a for-hire project, but I think not even most of the top players have the money to hire someone for a year or two to do a speculative project making an entirely new kind of engine. A 100 Elo stronger engine will only help you so much during opening preparation/training anyway; knowing how to work effectively with the computer is much more valuable. After all, it's not like you can use it while playing (unless it's freestyle chess).

The good news is that DeepMind's approach seems to become simpler and simpler over time. The first version of AlphaGo had all sorts of complexities and relied partially on hand-crafted features (although it wasn't very widely publicized), while the latest versions have removed a lot of the fluff. Make no mistake, though; the devil is in the details, and writing a top-class chess engine is a huge undertaking. My hunch is two to three years before you can buy something that beats Stockfish on the same hardware. But I will hedge my bet here; it's hard to make predictions, especially about the future. Even with a world-class neural network in your brain.

07 December, 2017 07:35PM

Jonathan Dowland

Three Minimalism reads

"The Life-Changing Magic of Tidying Up" by Marie Kondo is a popular (New York Times best selling) book by lifestyle consultant Mari Kondo about tidying up and decluttering. It's not strictly about minimalism, although her approach is informed by her own preferences which are minimalist. Like all self-help books, there's some stuff in here that you might find interesting or applicable to your own life, amongst other stuff you might not. Kondo believes, however, that her methods only works if you stick to them utterly.

Next is "Goodbye, Things" by Fumio Sasaki. The end-game for this book really is minimalism, but the book is structured in such a way that readers at any point on a journey to minimalism (or coinciding with minimalism, if that isn't your end-goal) can get something out of it. A large proportion of the middle of the book is given over to a general collection of short, one-page-or-less tips on decluttering, minimising, etc. You can randomly flip through this section a bit like randomly drawing a card from a deck. I started to wonder whether there's a gap in the market for an Oblique Strategies-like minimalism product. The book recommended several blogs for further reading, but they are all written in Japanese.

Finally issue #18 of New Philosopher is the "Stuff" issue and features several articles from modern Philosophers (as well as some pertinent material from classical ones) on the nature of materialism. I've been fascinated by Philosophy from a distance ever since my brother read it as an Undergraduate so I occasionally buy the philosophical equivalent of Pop Science books or magazines, but this was the most accessible for me that I've read to date.

07 December, 2017 04:26PM

Wouter Verhelst

Adding subtitles with FFmpeg

For future reference (to myself, for the most part):

ffmpeg -i foo.webm -i foo.en.vtt -i foo.nl.vtt -map 0:v -map 0:a \
  -map 1:s -map 2:s -metadata:s:a language=eng -metadata:s:s:0   \
  language=eng -metadata:s:s:1 language=nld -c copy -y           \
  foo.subbed.webm

... is one way to create a single .webm file from one .webm input file and multiple .vtt files. A little bit of explanation:

  • The -i arguments pass input files. You can have multiple input files for one output file. They are numbered internally (this is necessary for the -map and -metadata options later), starting from 0.
  • The -map options take a "mapping". With them, you specify which input streams should go where in the output stream. By default, if you have multiple streams of the same type, ffmpeg will only pick one (the "best" one, whatever that is). The mappings we specify are:

    • -map 0:v: this means to take the video stream from the first file (this is the default if you do not specify any mapping at all; but if you do specify a mapping, you need to be complete)
    • -map 0:a: take the audio stream from the first file as well (same as with the video).
    • -map 1:s: take the subtitle stream from the second (i.e., indexed 1) file.
    • -map 2:s: take the subtitle stream from the third (i.e., indexed 2) file.
  • The -metadata options set metadata on the output file. Here, we pass:

    • -metadata:s:a language=eng, to add a 's'tream metadata item on the 'a'udio stream, with name language and content eng. The language metadata in ffmpeg is special, in that it gets automatically translated to the correct way of specifying the language in the target container format.
    • -metadata:s:s:0 language=eng, to add a 's'tream metadata item on the first (indexed 0) 's'ubtitle stream in the output file. This too has the English language set.
    • -metadata:s:s:1 language=nld, to add a 's'tream metadata item on the second (indexed 1) 's'ubtitle stream in the output file. This has Dutch set as the language.
  • The -c copy option tells ffmpeg to not transcode the input video data, but just to rewrite the container. This works because all input files (WebM video plus VTT subtitles) are valid for WebM. If you do not have an input subtitle format that is valid for WebM, you can instead limit the copy modifier to the video and audio only, allowing ffmpeg to transcode the subtitles. This is done by way of -c:v copy -c:a copy.
  • Finally, we pass -y to specify that any pre-existing output file should be overwritten, and the name of the output file.
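
One quick way to check that the streams and language tags ended up where you expect is to inspect the result with ffprobe:

ffprobe -hide_banner foo.subbed.webm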

07 December, 2017 12:52PM

Dirk Eddelbuettel

RcppArmadillo 0.8.300.1.0

Another RcppArmadillo release hit CRAN today. Since our last 0.8.100.1.0 release in October, Conrad kept busy and produced Armadillo releases 8.200.0, 8.200.1, 8.300.0 and now 8.300.1. We tend to now package these (with proper reverse-dependency checks and all) first for the RcppCore drat repo from where you can install them "as usual" (see the repo page for details). But this actual release resumes within our normal bi-monthly CRAN release cycle.
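
For the impatient, installing from the drat repo can be as simple as pointing install.packages() at it alongside a regular CRAN mirror; the repo URL below follows the usual drat convention for the RcppCore account, so do double-check it against the repo page:

Rscript -e 'install.packages("RcppArmadillo", repos = c("https://RcppCore.github.io/drat", "https://cloud.r-project.org"))'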

These releases iron out a few little nags from the recent switch to more extensive use of OpenMP, and round out a number of other corners. See below for a brief summary.

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language--and is widely used by (currently) 405 other packages on CRAN.

A high-level summary of changes follows.

Changes in RcppArmadillo version 0.8.300.1.0 (2017-12-04)

  • Upgraded to Armadillo release 8.300.1 (Tropical Shenanigans)

    • faster handling of band matrices by solve()

    • faster handling of band matrices by chol()

    • faster randg() when using OpenMP

    • added normpdf()

    • expanded .save() to allow appending new datasets to existing HDF5 files

  • Includes changes made in several earlier GitHub-only releases (versions 0.8.300.0.0, 0.8.200.2.0 and 0.8.200.1.0).

  • Conversion from simple_triplet_matrix is now supported (Serguei Sokol in #192).

  • Updated configure code to check for g++ 5.4 or later to enable OpenMP.

  • Updated the skeleton package to current packaging standards

  • Suppress warnings from Armadillo about missing OpenMP support and -fopenmp flags by setting ARMA_DONT_PRINT_OPENMP_WARNING

Courtesy of CRANberries, there is a diffstat report. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

07 December, 2017 12:59AM

December 06, 2017

Markus Koschany

My Free Software Activities in November 2017

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in  Java, Games and LTS topics, this might be interesting for you.

Debian Games

Debian Java

  • New upstream versions this month: undertow, jackrabbit, libpdfbox2, easymock, libokhttp-java, mediathekview, pdfsam, libsejda-java, libsambox-java and libnative-platform-java.
  • I updated bnd (2.4.1-7) in order to help with the removal of Eclipse from Testing. Unfortunately there is more work to do and the only way forward is to package a newer version of Eclipse and to split the package in such a way that such issues can be avoided in the future. P.S.: We appreciate help with maintaining Eclipse! (#681726)
  • I sponsored libimglib2-java for Ghislain Antony Vaillant.
  • I fixed a regression in libmetadata-extractor-java related to relative classpaths. (#880746)
  • I spent more time on upgrading Gradle to version 3.4.1 and finally succeeded. The package is in experimental now. Upgrading from 3.2.1 to 3.4.1 didn’t seem like a big undertaking but the 8 MB debdiff and ~170000 lines of code changes proved me wrong. I discovered two regressions with this version in mockito and bnd. The former could be resolved, but bnd probably requires an upgrade as well. I would like to avoid that at the moment because major bnd upgrades tend to affect dozens of reverse-dependencies, mostly in a negative way.
  • Netbeans was affected by a regression in jaxb and failed to build from source. (#882525) I could partly revert the damage but another bug in jaxb 2.3.0 is currently preventing a complete recovery.
  • I fixed two Java 9 transition bugs in libnative-platform-java (#874645) and  jedit (#875583).

Debian LTS

This was my twenty-first month as a paid contributor and I have been paid to work 14.75 hours (13 +1.75 from October) on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • DLA-1177-1. Issued a security update for poppler fixing 4 CVE.
  • DLA-1178-1. Issued a security update for opensaml2 fixing 1 CVE.
  • DLA-1179-1. Issued a security update for shibboleth-sp2 fixing 1 CVE.
  • DLA-1180-1. Issued a security update for libspring-ldap-java fixing 1 CVE.
  • DLA-1184-1. Issued a security update for optipng fixing 1 CVE.
  • DLA-1185-1. Issued a security update for sam2p fixing 1 CVE.
  • DLA-1197-1. Issued a security update for sox fixing 7 CVE.
  • DLA-1198-1. Issued a security update for libextractor fixing 6 CVE. I also discovered that libextractor in buster/sid is still affected by more security issues and reported my findings as Debian bug #883528.

Misc

  • I packaged a new upstream release of osmo, a neat task manager and calendar application.
  • I prepared a security update for sam2p, which will be part of the next Jessie point release, and libspring-ldap-java. (DSA-4046-1)

Thanks for reading and see you next time.

06 December, 2017 06:33PM by Apo

December 05, 2017

Renata D'Avila

Creating a blog with pelican and Github pages

Today I'm going to talk about how this blog was created. Before we begin, I expect you to be familiar with using Github and with creating a Python virtual environment to develop in. If you aren't, I recommend learning with the Django Girls tutorial, which covers that and more.

This is a tutorial to help you publish a personal blog hosted by Github. For that, you will need a regular Github user account (instead of a project account).

The first thing you will do is to create the Github repository where your code will live. If you want your blog to point to only your username (like rsip22.github.io) instead of a subfolder (like rsip22.github.io/blog), you have to create the repository with that full name.

[Screenshot: the Github menu to create a new repository is open, and a new repo is being created with the name 'rsip22.github.io']

I recommend that you initialize your repository with a README, with a .gitignore for Python and with a free software license. If you use a free software license, you still own the code, but you make sure that others will benefit from it, by allowing them to study it, reuse it and, most importantly, keep sharing it.

Now that the repository is ready, let's clone it to the folder you will be using to store the code in your machine:

$ git clone https://github.com/YOUR_USERNAME/YOUR_USERNAME.github.io.git

And change to the new directory:

$ cd YOUR_USERNAME.github.io

Because of how Github Pages prefers to work, serving the files from the master branch, you have to put your source code in a new branch, preserving the "master" for the output of the static files generated by Pelican. To do that, you must create a new branch called "source":

$ git checkout -b source

Create the virtualenv using the Python 3 version installed on your system.

On GNU/Linux systems, the command might go as:

$ python3 -m venv venv

or as

$ virtualenv --python=python3.5 venv

And activate it:

$ source venv/bin/activate

Inside the virtualenv, you have to install pelican and its dependencies. You should also install ghp-import (to help us with publishing to Github) and Markdown (for writing your posts using Markdown). It goes like this:

(venv)$ pip install pelican markdown ghp-import

Once that is done, you can start creating your blog using pelican-quickstart:

(venv)$ pelican-quickstart

It will prompt you with a series of questions. Before answering them, take a look at my answers below:

> Where do you want to create your new web site? [.] ./
> What will be the title of this web site? Renata's blog
> Who will be the author of this web site? Renata
> What will be the default language of this web site? [pt] en
> Do you want to specify a URL prefix? e.g., http://example.com   (Y/n) n
> Do you want to enable article pagination? (Y/n) y
> How many articles per page do you want? [10] 10
> What is your time zone? [Europe/Paris] America/Sao_Paulo
> Do you want to generate a Fabfile/Makefile to automate generation and publishing? (Y/n) Y **# PAY ATTENTION TO THIS!**
> Do you want an auto-reload & simpleHTTP script to assist with theme and site development? (Y/n) n
> Do you want to upload your website using FTP? (y/N) n
> Do you want to upload your website using SSH? (y/N) n
> Do you want to upload your website using Dropbox? (y/N) n
> Do you want to upload your website using S3? (y/N) n
> Do you want to upload your website using Rackspace Cloud Files? (y/N) n
> Do you want to upload your website using GitHub Pages? (y/N) y
> Is this your personal page (username.github.io)? (y/N) y
Done. Your new project is available at /home/username/YOUR_USERNAME.github.io

About the time zone, it should be specified as a TZ database time zone name (full list here: List of tz database time zones).
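
If you mistype it or want to change it later, the answer simply ends up in the TIMEZONE variable of pelicanconf.py, so it can be edited at any time. A quick way to check the current value (assuming the default quickstart layout):

$ grep TIMEZONE pelicanconf.py
TIMEZONE = 'America/Sao_Paulo'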

Now, go ahead and create your first blog post! You might want to open the project folder in your favorite code editor and find the "content" folder inside it. Then, create a new file, which can be called my-first-post.md (don't worry, this is just for testing, you can change it later). The contents should begin with the metadata that identifies the post's Title, Date, Category and more, followed by the content itself, like this:

.lang="markdown" # DON'T COPY this line, it exists just for highlighting purposes

Title: My first post
Date: 2017-11-26 10:01
Modified: 2017-11-27 12:30
Category: misc
Tags: first, misc
Slug: My-first-post
Authors: Your name
Summary: What does your post talk about? Write here.

This is the *first post* from my Pelican blog. **YAY!**

Let's see how it looks?

Go to the terminal, generate the static files and start the server. To do that, use the following command:

(venv)$ make html && make serve

While this command is running, you should be able to visit it in your favorite web browser by typing localhost:8000 in the address bar.

Screenshot of the blog home. It has a header with the title Renata's blog, the first post on the left, info about the post on the right, links and social on the bottom.

Pretty neat, right?

Now, what if you want to put an image in a post, how do you do that? Well, first you create a directory inside your content directory, where your posts are. Let's call this directory 'images' for easy reference. Now, you have to tell Pelican to use it. Find the pelicanconf.py, the file where you configure the system, and add a variable that contains the directory with your images:

.lang="python" # DON'T COPY this line, it exists just for highlighting purposes

STATIC_PATHS = ['images']

Save it. Go to your post and add the image this way:

.lang="markdown" # DON'T COPY this line, it exists just for highlighting purposes

![Write here a good description for people who can't see the image]({filename}/images/IMAGE_NAME.jpg)

You can interrupt the server at any time by pressing CTRL+C in the terminal. But you should start it again and check if the image is correct. Can you remember how?

(venv)$ make html && make serve

One last step before your coding is "done": you should make sure anyone can read your posts using Atom or RSS feeds. Find the pelicanconf.py, the file where you configure the system, and edit the part about feed generation:

.lang="python" # DON'T COPY this line, it exists just for highlighting purposes

FEED_ALL_ATOM = 'feeds/all.atom.xml'
FEED_ALL_RSS = 'feeds/all.rss.xml'
AUTHOR_FEED_RSS = 'feeds/%s.rss.xml'
RSS_FEED_SUMMARY_ONLY = False

Save everything so you can send the code to Github. You can do that by adding all files, committing them with a message ('first commit') and using git push. You will be asked for your Github login and password.

$ git add -A && git commit -a -m 'first commit' && git push --all

And... remember how at the very beginning I said you would be preserving the master branch for the output of the static files generated by Pelican? Now it's time for you to generate them:

$ make github

You will be asked for your Github login and password again. And... voilà! Your new blog should be live on https://YOUR_USERNAME.github.io.
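
In case you are curious, the github target comes from the Makefile that pelican-quickstart generated for you. The exact recipe may vary between Pelican versions, but it is roughly equivalent to building the site with the publish settings and pushing the generated output to master with ghp-import:

$ pelican content -o output -s publishconf.py
$ ghp-import -b master output
$ git push origin master

This is also why the "source" branch keeps your Markdown and configuration, while "master" only ever contains the generated HTML.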

If you had an error in any step of the way, please reread this tutorial, try and see if you can detect in which part the problem happened, because that is the first step to debugging. Sometimes, even something simple like a typo or, with Python, a wrong indentation, can give us trouble. Shout out and ask for help online or in your community.

For tips on how to write your posts using Markdown, you should read the Daring Fireball Markdown guide.

To get other themes, I recommend you visit Pelican Themes.

This post was adapted from Adrien Leger's Create a github hosted Pelican blog with a Bootstrap3 theme. I hope it was somewhat useful for you.

05 December, 2017 10:30PM by Renata

hackergotchi for Keith Packard

Keith Packard

nuttx-scheme

Scheme For NuttX

Last fall, I built a tiny lisp interpreter for AltOS. That was fun, but had some constraints:

  • Yet another lisp-like language
  • Ran only on AltOS, not exactly a widely used operating system.

To fix the first problem, I decided to try and just implement scheme. The language I had implemented wasn't far off; it had lexical scoping and call-with-current-continuation after all. The rest is pretty simple stuff.

To fix the second problem, I ported the interpreter to NuttX. NuttX is a modest operating system for 8 to 32-bit microcontrollers with a growing community of developers.

I downloaded the most recent Scheme spec, the Revised⁷ Report, which is the 'small language' follow-on to the contentious Revised⁶ Report.

Converting ao-lisp to ao-scheme

Reading through the spec, it was clear there were a few things I needed to deal with to provide something that looked like scheme:

  • quasiquote
  • syntax-rules
  • function names
  • boolean type

Quasiquote turned out to be fun -- the spec described it in terms of a set of list forms, so I hacked up the reader to convert the convenient syntax using ` , and ,@ into lists and wrote a macro to emit construction code from the generated lists.

Syntax-rules is a 'nicer' way to write macros, and I haven't implemented it yet. There's nothing it can do which the underlying full macros cannot, so I'm planning on just writing it in scheme rather than having a pile more C code.

Scheme has a separate boolean type: rather than using the empty list, (), for false, it uses #f and has everything else be 'true'. Adding that wasn't hard, just tedious, as I had to work through any place that used boolean values and switch it to using #f or #t.

There were also a pile of random function name swaps and another bunch of new functions to write.

All in all, not a huge amount of work, and now the language looks a lot more like scheme.

Adding more number types

The original language had only integers, and those were only 14 bits wide. To make the language a bit more usable, I added 24 bit integers as well, along with 32-bit floats. Then I added automatic promotion between representations and the usual scheme tests for exactness. This added a bit to the footprint, and maybe I should make it optional.

Porting to NuttX

This was the easiest piece of the process -- NuttX offers a posix-like API, just like AltOS. Getting it to build was actually a piece of cake. The only part requiring new code was the lack of any command line editing or echo -- I ended up using readline to get that to work.

I was pleased that all of the language changes I made didn't significantly impact the footprint of the resulting system. I built NuttX for the stm32f4-discovery board, compiling in basic and then comparing with and without scheme:

Before:

$ size nuttx
   text    data     bss     dec     hex filename
 183037     172    4176  187385   2dbf9 nuttx

After:

$ size nuttx
   text    data     bss     dec     hex filename
 213197     188   22672  236057   39a19 nuttx

The increase in text includes 11kB of built-in lisp code, so that when the interpreter starts, you already have all of the necessary lisp code loaded that turns the bare interpreter into a full scheme system. That makes the core interpreter around 20kB of code, which is nice and compact (at least for scheme, I think).
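
To make that arithmetic explicit (the numbers come straight from the size output above), the text segment grew by about 30 kB:

$ echo $((213197 - 183037))
30160

Subtract the roughly 11 kB of embedded lisp and you are left with about 19-20 kB for the interpreter itself.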

The BSS space includes the heap; this can be set to any size you like. It would probably be good to have that allocated on the fly instead of being reserved even when the interpreter isn't running.

Where's the Code

I've pushed the code here:

$ git clone git://keithp.com/git/apps

Future Work

There's more work to complete the language support; here are some tasks needing attention at some point:

  • No vectors or bytevectors
  • Characters are just numbers
  • No dynamic-wind or exceptions
  • No environments
  • No ports
  • No syntax-rules
  • No record types
  • No libraries
  • Heap allocated from BSS

A Sample Application!

Here's towers of hanoi in scheme for nuttx:

;
; Towers of Hanoi
;
; Copyright © 2016 Keith Packard <keithp@keithp.com>
;
; This program is free software; you can redistribute it and/or modify
; it under the terms of the GNU General Public License as published by
; the Free Software Foundation, either version 2 of the License, or
; (at your option) any later version.
;
; This program is distributed in the hope that it will be useful, but
; WITHOUT ANY WARRANTY; without even the implied warranty of
; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
; General Public License for more details.
;

                    ; ANSI control sequences

(define (move-to col row)
  (for-each display (list "\033[" row ";" col "H"))
  )

(define (clear)
  (display "\033[2J")
  )

(define (display-string x y str)
  (move-to x y)
  (display str)
  )

(define (make-piece num max)
                    ; A piece for position 'num'
                    ; is num + 1 + num stars
                    ; centered in a field of max *
                    ; 2 + 1 characters with spaces
                    ; on either side. This way,
                    ; every piece is the same
                    ; number of characters

  (define (chars n c)
    (if (zero? n) ""
      (+ c (chars (- n 1) c))
      )
    )
  (+ (chars (- max num 1) " ")
     (chars (+ (* num 2) 1) "*")
     (chars (- max num 1) " ")
     )
  )

(define (make-pieces max)
                    ; Make a list of numbers from 0 to max-1
  (define (nums cur max)
    (if (= cur max) ()
      (cons cur (nums (+ cur 1) max))
      )
    )
                    ; Create a list of pieces

  (map (lambda (x) (make-piece x max)) (nums 0 max))
  )

                    ; Here's all of the towers of pieces
                    ; This is generated when the program is run

(define towers ())

                    ; position of the bottom of
                    ; the stacks set at runtime
(define bottom-y 0)
(define left-x 0)

(define move-delay 25)

                    ; Display one tower, clearing any
                    ; space above it

(define (display-tower x y clear tower)
  (cond ((= 0 clear)
     (cond ((not (null? tower))
        (display-string x y (car tower))
        (display-tower x (+ y 1) 0 (cdr tower))
        )
           )
     )
    (else 
     (display-string x y "                    ")
     (display-tower x (+ y 1) (- clear 1) tower)
     )
    )
  )

                    ; Position of the top of the tower on the screen
                    ; Shorter towers start further down the screen

(define (tower-pos tower)
  (- bottom-y (length tower))
  )

                    ; Display all of the towers, spaced 20 columns apart

(define (display-towers x towers)
  (cond ((not (null? towers))
     (display-tower x 0 (tower-pos (car towers)) (car towers))
     (display-towers (+ x 20) (cdr towers)))
    )
  )

                    ; Display all of the towers, then move the cursor
                    ; out of the way and flush the output

(define (display-hanoi)
  (display-towers left-x towers)
  (move-to 1 23)
  (flush-output)
  (delay move-delay)
  )

                    ; Reset towers to the starting state, with
                    ; all of the pieces in the first tower and the
                    ; other two empty

(define (reset-towers len)
  (set! towers (list (make-pieces len) () ()))
  (set! bottom-y (+ len 3))
  )

                    ; Move a piece from the top of one tower
                    ; to the top of another

(define (move-piece from to)

                    ; references to the cons holding the two towers

  (define from-tower (list-tail towers from))
  (define to-tower (list-tail towers to))

                    ; stick the car of from-tower onto to-tower

  (set-car! to-tower (cons (caar from-tower) (car to-tower)))

                    ; remove the car of from-tower

  (set-car! from-tower (cdar from-tower))
  )

                    ; The implementation of the game

(define (_hanoi n from to use)
  (cond ((= 1 n)
     (move-piece from to)
     (display-hanoi)
     )
    (else
     (_hanoi (- n 1) from use to)
     (_hanoi 1 from to use)
     (_hanoi (- n 1) use to from)
     )
    )
  )

                    ; A pretty interface which
                    ; resets the state of the game,
                    ; clears the screen and runs
                    ; the program

(define (hanoi len)
  (reset-towers len)
  (clear)
  (display-hanoi)
  (_hanoi len 0 1 2)
  #t
  )

05 December, 2017 10:25PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

back on the Linux desktop

As forecast, I've switched from Mac back to Linux on the Desktop. I'm using a work-supplied Thinkpad T470s which is a nice form-factor machine (the T450s was the first Thinkpad to widen my perspective away from just looking at the X series).

I've installed Debian to get started and ended up with GNOME 3 as the desktop (I was surprised not to be prompted for a choice in the installer, but on reflection that makes sense: I did a non-networked install from the GNOME flavour of the live DVD). So for the time being I'm going to stick to GNOME 3 and see what's new/better/worse than last time, but once my replacement SSD arrives I can revisit.

I haven't made much progress on the sticking points I identified in my last post. I'm hoping to get 1pass up and running in the interim to read my 1Password DB so I can get by until I've found a replacement password manager that I like.

Most of my desktop configuration steps I have captured in some Ansible playbooks. I'm looking at Ansible after a long break from using puppet, and there are things I like and things I don't. I've also been exploring ownCloud for personal file sharing and, despite a couple of warning signs (urgh PHP, the official Debian package was dropped), I'm finding it really useful, in particular for sharing stuff with family. I might write more about both of those later.

05 December, 2017 03:35PM

hackergotchi for Joachim Breitner

Joachim Breitner

Finding bugs in Haskell code by proving it

Last week, I wrote a small nifty tool called bisect-binary, which semi-automates answering the question “To what extent can I fill this file up with zeroes and still have it working?”. I wrote it in Haskell, and part of the Haskell code, in the Intervals.hs module, is a data structure for “subsets of a file” represented as a sorted list of intervals:

data Interval = I { from :: Offset, to :: Offset }
newtype Intervals = Intervals [Interval]

The code is the kind of Haskell code that I like to write: a small local recursive function, a few guards for case analysis, and I am done:

intersect :: Intervals -> Intervals -> Intervals
intersect (Intervals is1) (Intervals is2) = Intervals $ go is1 is2
  where
    go _ [] = []
    go [] _ = []
    go (i1:is1) (i2:is2)
        -- reorder for symmetry
        | to i1 < to i2 = go (i2:is2) (i1:is1)
        -- disjoint
        | from i1 >= to i2 = go (i1:is1) is2
        -- subset
        | to i1 == to i2 = I f' (to i2) : go is1 is2
        -- overlapping
        | otherwise = I f' (to i2) : go (i1 { from = to i2} : is1) is2
      where f' = max (from i1) (from i2)

But clearly, the code is already complicated enough that it is easy to make a mistake. I could have put in some QuickCheck properties to test the code, but I was in a proving mood...

Now available: Formal Verification for Haskell

Ten months ago I complained that there was no good way to verify Haskell code (and created the nifty hack ghc-proofs). But things have changed since then, as a group at UPenn (mostly Antal Spector-Zabusky, Stephanie Weirich and myself) has created hs-to-coq: a translator from Haskell to the theorem prover Coq.

We have used hs-to-coq on various examples, as described in our CPP'18 paper, but it is high time to use it for real. The easiest way to use hs-to-coq at the moment is to clone the repository, copy one of the example directories (e.g. examples/successors), place the Haskell file to be verified there and put the right module name into the Makefile. I also commented out parts of the Haskell file that would drag in non-base dependencies.

Massaging the translation

Often, hs-to-coq translates Haskell code without a hitch, but sometimes, a bit of help is needed. In this case, I had to specify three so-called edits:

  • The Haskell code uses Intervals both as a name for a type and for a value (the constructor). This is fine in Haskell, which has separate value and type namespaces, but not for Coq. The line

    rename value Intervals.Intervals = ival

    changes the constructor name to ival.

  • I use the Int64 type in the Haskell code. The Coq version of Haskell’s base library that comes with hs-to-coq does not support that yet, so I change that via

    rename type GHC.Int.Int64 = GHC.Num.Int

    to the normal Int type, which itself is mapped to Coq’s Z type. This is not a perfect fit, and my verification would not catch problems that arise due to the boundedness of Int64. Since none of my code does arithmetic, only comparisons, I am fine with that.

  • The biggest hurdle is the recursion of the local go functions. Coq requires all recursive functions to be obviously (i.e. structurally) terminating, and the go above is not. For example, in the first case, the arguments to go are simply swapped. It is very much not obvious why this is not an infinite loop.

    I can specify a termination measure, i.e. a function that takes the arguments xs and ys and returns a “size” of type nat that decreases in every call: Add the lengths of xs and ys, multiply by two and add one if the first interval in xs ends before the first interval in ys.

    If the problematic function were a top-level function I could tell hs-to-coq about this termination measure and it would use this information to define the function using Program Fixpoint.

    Unfortunately, go is a local function, so this mechanism is not available to us. If I care more about the verification than about preserving the exact Haskell code, I could easily change the Haskell code to make go a top-level function, but in this case I did not want to change the Haskell code.

    Another way out offered by hs-to-coq is to translate the recursive function using an axiom unsafeFix : forall a, (a -> a) -> a. This looks scary, but as I explain in the previous blog post, this axiom can be used in a safe way.

    I should point out it is my dissenting opinion to consider this a valid verification approach. The official stand of the hs-to-coq author team is that using unsafeFix in the verification can only be a temporary state, and eventually you’d be expected to fix (heh) this, for example by moving the functions to the top-level and using hs-to-coq’s support for Program Fixpoint.

With these edits in place, hs-to-coq spits out a faithful Coq copy of my Haskell code.

Time to prove things

The rest of the work is mostly straight-forward use of Coq. I define the invariant I expect to hold for these lists of intervals, namely that they are sorted, non-empty, disjoint and non-adjacent:

Fixpoint goodLIs (is : list Interval) (lb : Z) : Prop :=
  match is with
    | [] => True
    | (I f t :: is) => (lb <= f)%Z /\ (f < t)%Z /\ goodLIs is t
   end.

Definition good is := match is with
  ival is => exists n, goodLIs is n end.

and I give them meaning as Coq type for sets, Ensemble:

Definition range (f t : Z) : Ensemble Z :=
  (fun z => (f <= z)%Z /\ (z < t)%Z).

Definition semI (i : Interval) : Ensemble Z :=
  match i with I f t => range f t end.

Fixpoint semLIs (is : list Interval) : Ensemble Z :=
  match is with
    | [] => Empty_set Z
    | (i :: is) => Union Z (semI i) (semLIs is)
end.

Definition sem is := match is with
  ival is => semLIs is end.

Now I prove for every function that it preserves the invariant and that it corresponds to the, well, corresponding function, e.g.:

Lemma intersect_good : forall (is1 is2 : Intervals),
  good is1 -> good is2 -> good (intersect is1 is2).
Proof. … Qed.

Lemma intersection_spec : forall (is1 is2 : Intervals),
  good is1 -> good is2 ->
  sem (intersect is1 is2) = Intersection Z (sem is1) (sem is2).
Proof. … Qed.

Even though I punted on the question of termination while defining the functions, I do not get around that while verifying this, so I formalize the termination argument above

Definition needs_reorder (is1 is2 : list Interval) : bool :=
  match is1, is2 with
    | (I f1 t1 :: _), (I f2 t2 :: _) => (t1 <? t2)%Z
    | _, _ => false
  end.

Definition size2 (is1 is2 : list Interval) : nat :=
  (if needs_reorder is1 is2 then 1 else 0) + 2 * length is1 + 2 * length is2.

and use it in my inductive proofs.

As I intend this to be a write-once proof, I happily copy’n’pasted proof scripts and did not do any cleanup. Thus, the resulting Proof file is big, ugly and repetitive. I am confident that judicious use of Coq tactics could greatly condense this proof.

Using Program Fixpoint after the fact?

These proofs are also an experiment in how I can actually do induction over a locally defined recursive function without too ugly proof goals (hence the line match goal with [ |- context [unsafeFix ?f _ _] ] => set (u := f) end.). One could improve upon this approach by following these steps:

  1. Define copies (say, intersect_go_witness) of the local go using Program Fixpoint with the above termination measure. The termination argument needs to be made only once, here.

  2. Use this function to prove that the argument f in go = unsafeFix f actually has a fixed point:

    Lemma intersect_go_sound:

    f intersect_go_witness = intersect_go_witness

    (This requires functional extensionality). This lemma indicates that my use of the axioms unsafeFix and unsafeFix_eq are actually sound, as discussed in the previous blog post.

  3. Still prove the desired properties for the go that uses unsafeFix, as before, but using the functional induction scheme for intersect_go! This way, the actual proofs are free from any noisy termination arguments.

    (The trick to define a recursive function just to throw away the function and only use its induction rule is one I learned in Isabelle, and is very useful to separate the meat from the red tape in complex proofs. Note that the induction rule for a function does not actually mention the function!)

Maybe I will get to this later.

Update: I experimented a bit in that direction, and it does not quite work as expected. In step 2 I am stuck because Program Fixpoint does not create a fixpoint-unrolling lemma, and in step 3 I do not get the induction scheme that I was hoping for. Both problems would not exist if I used the Function command, although that needs some trickery to support a termination measure on multiple arguments. The induction lemma is not quite as polished as I was hoping for, so the resulting proof is still somewhat ugly, and it requires copying code, which does not scale well.

Efforts and gains

I spent exactly 7 hours working on these proofs, according to arbtt. I am sure that writing these functions took me much less time, but I cannot calculate that easily, as they were originally in the Main.hs file of bisect-binary.

I did find and fix three bugs:

  • The intersect function would not always retain the invariant that the intervals would be non-empty.
  • The subtract function would prematurely advance through the list of intervals in the second argument, which can lead to a genuinely wrong result. (This occurred twice.)

Conclusion: Verification of Haskell code using Coq is now practically possible!

Final rant: Why is the Coq standard library so incomplete (compared to, say, Isabelle’s) that it requires me to prove so many lemmas about basic functions on Ensembles?

05 December, 2017 02:17PM by Joachim Breitner (mail@joachim-breitner.de)

Reproducible builds folks

Reproducible Builds: Weekly report #136

Here's what happened in the Reproducible Builds effort between Sunday, November 26 and Saturday, December 2, 2017:

Media coverage

Arch Linux imap key leakage

A security issue was found in the imap package in Arch Linux thanks to the reproducible builds effort in that distribution.

Due to a hardcoded key-generation routine in the build() step of imap's PKGBUILD (the standard packaging file for Arch Linux packages), a default secret key was generated and leaked on all imap installations. This was promptly reviewed, confirmed and fixed by the package maintainers.

This mirrors similar security issues found in Debian, such as #833885.

Debian packages reviewed and fixed, and bugs filed

In addition, 73 FTBFS bugs were detected and reported by Adrian Bunk.

Reviews of unreproducible Debian packages

83 package reviews have been added, 41 have been updated and 33 have been removed in this week, adding to our knowledge about identified issues.

1 issue type was updated:

LEDE / OpenWrt packages updates:

diffoscope development

reprotest development

Version 0.7.4 was uploaded to unstable by Ximin Luo. It included contributions already covered by posts of the previous weeks as well as new ones from:

reproducible-website development

tests.reproducible-builds.org

Misc.

This week's edition was written by Alexander Couzens, Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Santiago Torres-Arias, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

05 December, 2017 02:10PM

Petter Reinholdtsen

Is the short movie «Empty Socks» from 1927 in the public domain or not?

Three years ago, a presumed lost animation film, Empty Socks from 1927, was discovered in the Norwegian National Library. At the time it was discovered, it was generally assumed to be copyrighted by The Walt Disney Company, and I blogged about my reasoning to conclude that it would enter the Norwegian equivalent of the public domain in 2053, based on my understanding of Norwegian Copyright Law. But a few days ago, I came across a blog post claiming the movie was already in the public domain, at least in the USA. The reasoning is as follows: The film was released in November or December 1927 (sources disagree), and presumably registered its copyright that year. At that time, rights holders of movies registered by the copyright office received government protection for their work for 28 years. After 28 years, the copyright had to be renewed if they wanted the government to protect it further. The blog post I found claims such a renewal did not happen for this movie, and thus it entered the public domain in 1956. Yet someone claims the copyright was renewed and the movie is still copyright protected. Can anyone help me figure out which claim is correct? I have not been able to find Empty Socks in Catalog of copyright entries. Ser.3 pt.12-13 v.9-12 1955-1958 Motion Pictures available from the University of Pennsylvania, neither on page 45 for the first half of 1955, nor on page 119 for the second half of 1955. It is of course possible that the renewal entry was left out of the printed catalog by mistake. Is there some way to rule out this possibility? Please help, and update the Wikipedia page with your findings.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

05 December, 2017 11:30AM

December 04, 2017

hackergotchi for Thomas Lange

Thomas Lange

FAI.me build server improvements

Only one week ago, I announced the FAI.me build service for creating your own installation images. I've got some feedback, and people would like to have root login without a password but with an ssh key. This feature is now available. You can upload your public ssh key, which will be installed as authorized_keys for the root account.

You can now also download the configuration space that is used on the installation image, and you can get the whole log file from the fai-mirror call. This command creates the partial package mirror. The log file helps you debug if you add some packages which have conflicts with other packages, or if you misspelt a package name.

FAI.me

04 December, 2017 08:59PM

hackergotchi for Joey Hess

Joey Hess

new old thing

closeup of red knothole, purple-pink and yellow wood grain

This branch came from a cedar tree overhanging my driveway.

It was fun to bust this open and shape it with hammer and chisels. My dad once recommended learning to chisel before learning any power tools for wood working.. so I suppose this is a start.

roughly split wood branch

Some tung oil and drilling later, and I'm very pleased to have a nice place to hang my cast iron.

slightly curved wood beam with cast iron pots hung from it

04 December, 2017 08:50PM

hackergotchi for Holger Levsen

Holger Levsen

20171204-qubes-mirage-firewall

On using QubesOS MirageOS firewall

So I'm lucky to attend the 4th MirageOS hack retreat in Marrakesh this week, where I learned to build and use qubes-mirage-firewall, which is a MirageOS based (system) firewall for Qubes OS. The main visible effect is that this unikernel only needs 32 megabytes of memory, while a Debian (or Fedora) based firewall system needs half a gigabyte. It's also said to be more secure, but I have not verified that myself ;-)

In the spirit of avoiding overhead I decided not to build with docker as the qubes-mirage-firewall's README.md suggests, but rather use a base Debian stretch system. Here's how to build natively:

sudo apt install git opam aspcud curl debianutils m4 ncurses-dev perl pkg-config time

git clone https://github.com/talex5/qubes-mirage-firewall
cd qubes-mirage-firewall/
opam init
# the next line is super useful if there is bad internet connectivity but you happen to have access to a local mirror
# opam repo add local http://10.0.0.2:8080
opam switch 4.04.2
eval `opam config env`
## in there:
opam install -y vchan xen-gnt mirage-xen-ocaml mirage-xen-minios io-page mirage-xen mirage mirage-nat mirage-qubes netchannel
mirage configure -t xen
make depend
make tar

Then follow the instructions in the README.md and switch some AppVMs to it; if you are happy with the results, make it the default and shut down the old firewall. Currently I'm not sure I am happy, because it doesn't allow updating template VMs...

Update: qubes-mirage-firewall does allow this. It was just the crashed qubes-updates-proxy service in sys-net that prevented it, but that's another bug elsewhere.

I also learned that it builds reproducibly given the same build path and ignoring the issue of timestamps in the generated tarball; IOW, the unikernel (and the 3 other files) inside the tarball is reproducible. I still need to compare a docker build with a build done the above way, and I really don't like having to edit the firewall's rules.ml file and then rebuild it. More on this in another post later, hopefully.

Oh, I didn't mention it and won't say more here, but this hack retreat and its organisation are marvellous! Many thanks to everyone here!

04 December, 2017 02:37PM

December 03, 2017

hackergotchi for Rapha&#235;l Hertzog

Raphaël Hertzog

My Free Software Activities in November 2017

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I was allocated 12h but I only spent 10h. During this time, I managed the LTS frontdesk during one week, reviewing new security issues and classifying the associated CVEs (16 commits to the security tracker).

I prepared and released DLA-1171-1 on libxml-libxml-perl.

I prepared a new update for simplesamlphp (1.9.2-1+deb7u1) fixing 6 CVEs. I did not release any DLA yet since I have not been able to test the updated package. I’m hoping that the current maintainer can do it since he wanted to work on the update a few months ago.

Distro Tracker

Distro Tracker has seen a high level of activity in the last month. Ville Skyttä continued to contribute a few patches, he helped notably to get rid of the last blocker for a switch to Python 3.

I then worked with DSA to get the production instance (tracker.debian.org) upgraded to stretch with Python 3.5 and Django 1.11. This resulted in a few regressions related to the Python 3 switch (despite the large number of unit tests) that I had to fix.

In parallel Pierre-Elliott Bécue showed up on the debian-qa mailing list and he started to contribute. I have been exchanging with him almost daily on IRC to help him improve his patches. He has been very responsive and I’m looking forward to continuing to cooperate with him. His first patch enabled the use of the “src:” and “bin:” prefixes in the search feature, to specify whether we want to look up source packages or binary packages.

I did some cleanup/refactoring work after the switch of the codebase to Python 3 only.

Misc Debian work

Sponsorship. I sponsored many new packages: python-envparse 0.2.0-1, python-exotel 0.1.5-1, python-aws-requests-auth 0.4.1-1, pystaticconfiguration 0.10.3-1, python-jira 1.0.10-1, python-twilio 6.8.2-1, python-stomp 4.1.19-1. All those are dependencies for elastalert 0.1.21-1 that I also sponsored.

I sponsored updates for vboot-utils 0~R63-10032.B-2 (new upstream release for openssl 1.1 compat), aircrack-ng 1:1.2-0~rc4-4 (introducing airgraph-ng package) and asciidoc 8.6.10-2 (last upstream release, tool is deprecated).

Debian Installer. I submitted a few patches a while ago to support finding ISO images in LVM logical volumes in the hd-media installation method. Colin Watson reviewed them and made a few suggestions and expressed a few concerns. I improved my patches to take into account his suggestions and I resolved all the problems he pointed out. I then committed everything to the respective git repositories (for details review #868848, #868859, #868900, #868852).

Live Build. I merged 3 patches for live-build (#879169, #881941, #878430).

Misc. I uploaded Django 1.11.7 to stretch-backports. I filed an upstream bug on zim for #881464.

Thanks

See you next month for a new summary of my activities.


03 December, 2017 05:52PM by Raphaël Hertzog

Noah Meyerhans

On the demise of Linux Journal

Lwn, Slashdot, and many others have marked the recent announcement of Linux Journal's demise. I'll take this opportunity to share some of my thoughts, and to thank the publication and its many contributors for their work over the years.

I think it's probably hard for younger people to imagine what the Linux world was like 20 years ago. Today, it's really not an exaggeration to say that the Internet as we know it wouldn't exist at all without Linux. Almost every major Internet company you can think of runs almost completely on Linux. Amazon, Google, Facebook, Twitter, etc, etc. All Linux. In 1997, though, the idea of running a production workload on Linux was pretty far out there.

I was in college in the late 90's, and worked for a time at a small Cambridge, Massachusetts software company. The company wrote a pretty fancy (and expensive!) GUI builder targeting big expensive commercial UNIX platforms like Solaris, HP/UX, SGI IRIX, and others. At one point a customer inquired about the availability of our software on Linux, and I, as an enthusiastic young student, got really excited about the idea. The company really had no plans to support Linux, though. I'll never forget the look of disbelief on a company exec's face as he asked "$3000 on a Linux system?"

Throughout this period, on my lunch breaks from work, I'd swing by the now defunct Quantum Books. One of the monthly treats was a new issue of Linux Journal on the periodicals shelf. In these issues, I learned that more forward thinking companies actually were using Linux to do real work. An article entitled "Linux Sinks the Titanic" described how Hollywood deployed hundreds(!) of Linux systems running custom software to generate the special effects for the 1997 movie Titanic. Other articles documented how Linux was making inroads at NASA and in the broader scientific community. Even the ads were interesting, as they showed increasing commercial interest in Linux, both on the hardware (HyperMicro, VA Research, Linux Hardware Solutions, etc) and software (CDE, Xi Graphics) fronts.

The software world is very different now than it was in 1997. The media world is different, too. Not only is Linux well established, it's pretty much the dominant OS on the planet. When Linux Journal reported in the late 90's that Linux was being used for some new project, that was news. When they documented how to set up a Linux system to control some new piece of hardware or run some network service, you could bet that they filled a gap that nobody else was working on. Today, it's no longer news that a successful company is using Linux in production. Nor is it surprising that you can run Linux on a small embedded system; in fact it's quite likely that the system shipped with Linux pre-installed. On the media side, it used to be valuable to have everything bundled in a glossy, professionally produced archive published on a regular basis. Today, at least in the Linux/free software sphere, that's less important. Individual publication is easy on the Internet today, and search engines are very good at ensuring that the best content is the most discoverable content. The whole Internet is basically one giant continuously published magazine.

It's been a long time since I paid attention to Linux Journal, so from a practical point of view I can't honestly say that I'll miss it. I appreciate the role it played in my growth, but there are so many options for young people today entering the Linux/free software communities that it appears that the role is no longer needed. Still, the termination of this magazine is a permanent thing, and I can't help but worry that there's somebody out there who might thrive in the free software community if only they had the right door open before them.

03 December, 2017 02:54AM

December 02, 2017

hackergotchi for Thomas Goirand

Thomas Goirand

There’s cloud, and it can even be YOURS on YOUR computer

Each time I see the FSFE picture, just like on Daniel’s last post to planet.d.o, where it says:

“There is NO CLOUD, just other people’s computers”

it makes me so frustrated. There's such a thing as a private cloud, set up on your own set of servers. I've been working on delivering OpenStack to Debian for the last six and a half years, motivated exactly to fix this issue: I refuse to accept that the only cloud people could use would be a closed source solution like GCE, AWS or Azure. The FSFE (and the FSF) completely dismissing this work is more than annoying: it is counterproductive. Not only should the FSFE not pull anyone away from the cloud, it should push for the public to choose cloud providers using free software like OpenStack.

The openstack.org marketplace lists 23 public cloud providers using OpenStack, so there is now no excuse to use any other type of cloud: for sure, there's one where you need it. If you use a free software solution like OpenStack, then the question whether you're running on your own hardware, on some rented hardware (on which you deployed OpenStack yourself), or on someone else's OpenStack deployment is just a practical one, on which you can always backtrack quickly. That's one of the very reasons why one should deploy on the cloud: so that it's possible to redeploy quickly on another cloud provider, or even on your own private cloud. This gives you more freedom than you ever had, because it makes you no longer dependent on the hosting company you've selected: switching provider is just a matter of launching a script. The reality is that neither the FSFE nor RMS understands all of this. Please don't buy into the FSFE's very wrong message.

02 December, 2017 10:09PM by Goirand Thomas

hackergotchi for Steve Kemp

Steve Kemp

BlogSpam.net repository cleanup, and email-changes.

I've shuffled around all the repositories which are associated with the blogspam service, such that they're all in the same place and refer to each other correctly:

Otherwise I've done a bit of tidying up on virtual machines, and I'm just about to drop the use of qpsmtpd for handling my email. I've used the (perl-based) qpsmtpd project for many years, and documented how my system works in a "book":

I'll be switching to pure exim4-based setup later today, and we'll see what that does. So far today I've received over five thousand spam emails:

  steve@ssh /spam/today $ find . -type f | wc -l
  5731

Looking more closely, though, over half of these rejections are "dictionary attacks", so they're not SPAM I'd see if I dropped the qpsmtpd-layer. Here's a sample log entry (for a mail that was both rejected at SMTP-time by qpsmtpd and archived to disc in case of error):

   {"from":"<clzzgiadqb@ics.uci.edu>",
    "helo":"adrian-monk-v3.ics.uci.edu",
    "reason":"Mail for juha not accepted at steve.fi",
    "filename":"1512284907.P26574M119173Q0.ssh.steve.org.uk.steve.fi",
    "subject":"Viagra Professional. Beyond compare. Buy at our shop.",
    "ip":"2a00:6d40:60:814e::1",
    "message-id":"<p65NxDXNOo1b.cdD3s73osVDDQ@ics.uci.edu>",
    "recipient":"juha@steve.fi",
    "host":"Unknown"}

I suspect that with procmail piping to crm114, and a beefed up spam-checking configuration for exim4 I'll not see a significant difference and I'll have removed something non-standard. For what it is worth over 75% of the remaining junk which was rejected at SMTP-time has been rejected via DNS-blacklists. So again exim4 will take care of that for me.

If it turns out that I'm getting inundated with junk-mail I'll revert this, but I suspect that it'll all be fine.

02 December, 2017 10:00PM

Thorsten Alteholz

My Debian Activities in November 2017

FTP master

As you might have read elsewhere, I am no longer an FTP assistant. I am very delighted about my new delegation as FTP master.

So this month I almost doubled the number of packages I accepted, to 385, and I rejected 60 uploads. The overall number of packages that got accepted this month was 448.

Debian LTS

This was my forty-first month doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my all in all workload has been 13h. During that time I did LTS uploads of:

  • [DLA 1188-1] libxml2 security update for one CVE
  • [DLA 1191-1] python-werkzeug security update for one CVE
  • [DLA 1192-1] libofx security update for two CVEs
  • [DLA 1195-1] curl security update for one CVE
  • [DLA 1194-1] libxml2 security update for two CVEs

I also took care of an rsync issue and continued to work on wireshark.

Other stuff

During November I uploaded new upstream versions of …

I also did uploads of …

  • openoverlayrouter to change the source package Section: and fix some problems in Ubuntu
  • duktape to not only provide a shared library but also a pkg-config file
  • astronomical-almanac to make Helmut happy and fix an FTCBFS, where he also provided the patch

Last month I wrote about apcupsd as the DOPOM of October. Unfortunately, November brought the next power outage, due to some malfunction in a transformer station. I never would have guessed that such a malfunction could do so much harm within the power grid. Anyway, the power was back after 31 minutes and my batteries would have lasted 34 minutes before turning off all computers. At least my spec was correct :-).

The DOPOM for this month has been dateutils.

As it is again this time of the year, I would also like to draw some attention to the Debian Med Advent Calendar. Like in the past years, the Debian Med team starts a bug squashing event from December 1st to 24th. Every bug that is closed will be registered in the calendar. So instead of taking something from the calendar, this special one will be filled, and at Christmas hopefully every Debian Med related bug is closed. Don’t hesitate, start to squash :-).

Last but not least I sponsored the upload of evqueue-core.

02 December, 2017 04:55PM by alteholz

hackergotchi for Martin-&#201;ric Racine

Martin-Éric Racine

dbus, rsyslogd, systemd: Which one is the culprit?

I have been facing this issue for a few weeks on testing. For many weeks, it prevented upgrading dbus to the latest version that trickled into Testing. Having manually force-installed dbus via the Recovery Mode's shell, I then ran into this issue:


This is a nasty one, since it also prevents performing a clean poweroff. That systemd-journald line about getting a timeout while attempting to connect to the Watchdog keeps on showing ad infinitum.

What am I missing?

02 December, 2017 03:03PM by Martin-Éric (noreply@blogger.com)

December 01, 2017

hackergotchi for Daniel Pocock

Daniel Pocock

Hacking with posters and stickers

The FIXME.ch hackerspace in Lausanne, Switzerland has started this weekend's VR Hackathon with a somewhat low-tech 2D hack: using the FSFE's Public Money Public Code stickers in lieu of sticky tape to place the NO CLOUD poster behind the bar.

Get your free stickers and posters

FSFE can send you these posters and stickers too.

01 December, 2017 08:27PM by Daniel.Pocock

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, November 2017

I was assigned 13 hours of work by Freexian's Debian LTS initiative and carried over 4 hours from September. I worked all 17 hours.

I prepared and released two updates on the Linux 3.2 longterm stable branch (3.2.95, 3.2.96), but I didn't upload an update to Debian. However, I have rebased the Debian package on 3.2.96 and expect to make a new upload soon.

01 December, 2017 05:54PM

Mini-DebConf Cambridge 2017

Last week I attended Cambridge's annual mini-DebConf. It's slightly strange to visit a place one has lived in for a long time but which is no longer home. I joined Nattie in the 'video team house' which was rented for the whole week; I only went for four days.

I travelled down on Wednesday night, and spent a long time (rather longer than planned) on trains and in waiting rooms. I used this time to catch up on discussions about signing infrastructure for Secure Boot, explaining my concerns with the most recent proposal and proposing some changes that might alleviate those. Sorry to everyone who was waiting for that; I should have replied earlier.

On the Thursday and Friday I prepared for my talk, and had some conversations with Steve McIntyre and others about SB signing infrastructure. Nattie and Andy respectively organised group dinners at the Polish club on Thursday and a curry house on Friday, both of which I enjoyed.

The mini-DebConf proper took place on the Saturday and Sunday, and I presented my now annual talk on "What's new in the Linux kernel". As usual, the video team did a fine job of recording and publishing video of the talks.

01 December, 2017 05:51PM

hackergotchi for Junichi Uekawa

Junichi Uekawa

I was trying to get selenium up and running.

I was trying to get selenium up and running. I wanted to try headless Chrome, and the tool that seemed usable was Selenium, but that didn't just work out of the box with the Debian apt-get installed binary. Hmm.

01 December, 2017 05:20PM by Junichi Uekawa

hackergotchi for James McCoy

James McCoy

Monthly FLOSS activity - 2017/11 edition

Debian

vim

  • Uploaded version 2:8.0.1257-1 to fix a flaky test and the debsources syntax highlighting
    • This round of builds revealed some more flaky tests, so I sent PR#2282 to make the tests more deterministic.
  • Uploaded version 2:8.0.1257-2

neovim

  • Uploaded versions 0.2.1-1 and 0.2.1-1
    • There were various test failures, which thankfully boiled down to a few common issues
      • Lua appears to be stricter than LuaJIT when formatting nil
      • A few potentially flaky screen tests
      • Garbage being formatted into an error string
  • Uploaded version 0.2.1-3 to fix the test failures
  • Uploaded version 0.2.2-1

libvterm

  • Uploaded version 0~bzr715-1
    • This is needed by the new pangoterm revision to report focus events

pangoterm

  • Uploaded version 0~bzr607-1
    • Focus reporting can now be enabled by applications
    • The -T switch, an alias for --title, is supported for full compliance with the requirements of an x-terminal-emulator alternative

serf

  • Uploaded version 1.3.9-4
    • Cleaned up the packaging
    • Marked a symbol as public since svn has been using it for 10 years, which let me drop a patch from svn's packaging

Subversion

  • Changed how swig is invoked so there are explicit mechanisms for passing general or language-specific options when generating the bindings. No longer are CPPFLAGS being (ab)used to handle this, so the whack-a-mole game of stripping unrecognized switches was also removed. (r1816254)
  • Since there was a bit of development activity at the hackathon and talk of another pre-release for 1.10, I performed a Coverity scan of trunk. I did a quick scrub of some of the new and existing issues, one of which I proposed for a backport to 1.9.x.

neovim

01 December, 2017 05:02AM

Paul Wise

FLOSS Activities November 2017

Changes

Issues

Review

Administration

  • hexchat-addons: merged pull request
  • Debian: fix LDAP issue, reenable webserver on a VM, redirect users to support channels
  • Debian wiki: whitelist email addresses
  • Debian website: check on a build issue

Communication

  • Invite Slax to the Debian derivatives census

Sponsors

All work was done on a volunteer basis.

01 December, 2017 02:44AM

November 30, 2017

Antoine Beaupré

November 2017 report: LTS, standard disclosure, Monkeysphere in Python, flash fraud and Goodbye Drupal

Debian Long Term Support (LTS)

This is my monthly Debian LTS report. I didn't do as much as I wanted this month, so a bunch of hours are carried over to next month. I got frustrated by two difficult packages: exiv2 and libreoffice.

Exiv

For Exiv2 I first reported the issues upstream as requested in the original CVE assignment. Then I went to see if I could reproduce the issue. Valgrind didn't find anything, so I went on to test the new ASAN instructions that tell us how to build for ASAN in LTS. Turns out that I couldn't make that work either. Fortunately, Roberto was able to build the package properly and confirmed the wheezy version as non-vulnerable, so I marked the three CVEs as not-affected and moved on.

Libreoffice

Next up was LibreOffice. I started backporting the patches to wheezy, which was a little difficult because any error in the backport takes hours to find, since LibreOffice is so big. The monster takes about 4 hours to build on my i3-6100U processor - I can't imagine how long that would take on a slower machine. Still, I managed to get patches that mostly build. I say mostly because while most of the code builds, the tests fail to build. Not only do they fail to build, but they even segfault the linker. At that point, I had already spent too many hours working on this frustrating loop of "work/build-wait/crash", so I gave up.

I also worked on reproducing a supposed regression associated with the last security update. Somehow, I couldn't reproduce it either - the description of the regression was very limited and all suggested approaches failed to trigger the problems described.

OptiPNG

Finally, a little candy: an easy backport of a simple 2-line patch for a simple program, OptiPNG, that, ironically, had a vulnerability (CVE-2017-16938) in GIF parsing. I could do hundreds of those breezy updates; they are fun, simple, and easy to test. This resulted in the trivial DLA-1196-1.

Miscellaneous

LibreOffice stretched the limits of my development environment. I had to figure out how to deal with out-of-space conditions in the build tree (/build), something that is really not obvious in sbuild. I ended up documenting that in a new troubleshooting section in the wiki.

Other free software work

feed2exec

I pushed forward with the development of my programmable feed reader, feed2exec. Last month I mentioned I released the 0.6.0 beta: since then 4 more releases were published, and we are now at the 0.10.0 beta. This added a bunch of new features:

  • wayback plugin to save feed items to the Wayback Machine on archive.org
  • archive plugin to save feed items to the local filesystem
  • transmission plugin to save RSS Torrent feeds to the Transmission torrent client
  • vast expansion of the documentation, now hosted on ReadTheDocs. The design was detailed with a tour of the source code and detailed plugin writing instructions were added to the documentation, also shipped as a feed2exec-plugins manpage.
  • major cleanup and refactoring of the codebase, including standard locations for the configuration files, which moved

The documentation deserves special mention. If you compare between version 0.6 and the latest version you can see 4 new sections:

  • Plugins - extensive documentation on plugins use, the design of the plugin system and a full tutorial on how to write new plugins. the tutorial was written while writing the archive plugin, which was written as an example plugin just for that purpose and should be readable by novice programmers
  • Support - a handy guide on how to get technical support for the project, copied over from the Monkeysign project.
  • Code of conduct - was originally part of the contribution guide. the idea then was to force people to read the Code when they wanted to contribute, but it wasn't a good idea. The contribution page was overloaded and critical parts were hidden down in the page, after what is essentially boilerplate text. Inversely, the Code was itself hidden in the contribution guide. Now it is clearly visible from the top and trolls will see this is an ethical community.
  • Contact - another idea from the Monkeysign project. became essential when the security contact was added (see below).

All those changes were backported in the ecdysis template documentation and I hope to backport them back into my other projects eventually. As part of my documentation work, I also drifted into the Sphinx project itself and submitted a patch to make manpage references clickable as well.

I now use feed2exec to archive new posts on my website to the internet archive, which means I have an ad-hoc offsite backup of all content I post online. I think that's pretty neat. I also leverage the Linkchecker program to look for dead links in new articles published on the site. This is possible thanks to a Ikiwiki-specific filter to extract links to changed files from the Recent Changes RSS feed.

I'm considering making the parse step of the program pluggable. This is an idea I had in mind for a while, but it didn't make sense until recently. I described the project and someone said "oh that's like IFTTT", a tool I wasn't really aware of, which connects various "feeds" (Twitter, Facebook, RSS) to each other using triggers. The key concept here is that feed2exec could be made to read from Twitter or other feeds, like IFTTT, and not just write to them. This could allow users to bridge social networks by writing only to a single one and broadcasting to the other ones.

Unfortunately, this means adding a lot of interface code and I do not have a strong use case for this just yet. Furthermore, it may mean switching from a "cron" design to a more interactive, interrupt-driven design that would run continuously and wake up on certain triggers.

Maybe that could come in a 2.0 release. For now, I'll see to it that the current codebase is solid. I'll release a 0.11 release candidate shortly, which has seen a major refactoring since 0.10. I again welcome beta testers and users to report their issues. I am happy to report I got and fixed my first bug report on this project this month.

Towards standard security disclosure guidelines

When reading the excellent State of Open Source Security report, some metrics caught my eye:

  • 75% of vulnerabilities are not discovered by the maintainer

  • 79.5% of maintainers said that they had no public-facing disclosure policy in place

  • 21% of maintainers who do not have a public disclosure policy have been notified privately about a vulnerability

  • 73% of maintainers who do have a public disclosure policy have been notified privately about a vulnerability

In other words, having a public disclosure policy more than triples your chances of being notified of a security vulnerability. I was also surprised to find that 4 out of 5 projects do not have such a page. Then I realized that none of my projects had such a page, so I decided to fix that and fix my documentation templates (the infamously named ecdysis project) to specifically include a section on security issues.

I found that the HackerOne disclosure guidelines were pretty good, except they require having a bounty for disclosure. I understand it's part of their business model, but I have no such money to give out - in fact, I don't even pay myself for the work of developing the project, so I don't quite see why I would pay for disclosures.

I also found that many projects include OpenPGP key fingerprints in their contact information. I find that a little silly: project documentation is no place to offer OpenPGP key discovery. If security researchers cannot find and verify OpenPGP key fingerprints, I would be worried about their capabilities. Adding a fingerprint or key material is just bound to create outdated documentation when maintainers rotate. Instead, I encourage people to use proper key discovery mechanisms like the Web of Trust, WKD or, obviously, TOFU, which is basically what publishing a fingerprint does anyway.
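
As a concrete example of such a mechanism, this is how a key can be fetched over WKD with a stock GnuPG (2.1 or later), restricting the lookup to WKD only; the address is of course just a placeholder:

$ gpg --auto-key-locate clear,nodefault,wkd --locate-keys security@example.org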

Git-Mediawiki

After being granted access to the Git-Mediawiki project last month, I got to work. I fought hard with Travis, Git, Perl and MediaWiki to add continuous integration to the repository. It turns out the MediaWiki remote doesn't support newer versions of MediaWiki because the authentication system changed radically. Even though there is supposed to be backwards compatibility, it doesn't seem to really work in my case, which means that any test case that requires logging into the wiki fails. This also required using an external test suite instead of the one provided by Git, which insists on building Git before being used at all. I ended up using the Sharness project and submitted a few tests that were missing.
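
For readers who haven't seen Sharness before, a test script is plain POSIX shell that sources sharness.sh; here is a minimal sketch (a hypothetical smoke test, not one of the tests I actually submitted):

#!/bin/sh

test_description='git-mediawiki smoke test (hypothetical example)'

. ./sharness.sh

test_expect_success 'git is available' '
    git --version
'

test_done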

I also opened a discussion regarding the versioning scheme and started tracking issues that would be part of the next release. I encourage people to get involved in this project if they are interested: the door is wide open for contributors, of course.

Rewriting Monkeysphere in Python?

After being told one too many times that Monkeysphere doesn't support elliptic curve algorithms, I finally documented the issue and looked into solutions. It turns out that a key part of Monkeysphere is a Perl program written using a library that doesn't support ECC itself, which makes this problem very hard to solve. So I looked into the PGPy project to see if it could be useful, and it turns out that ECC support already exists - the only problem is that the specific curve GnuPG uses, ed25519, was missing. The author fixed that, but the fix requires a development version of OpenSSL, because even the 1.1 release doesn't support that curve.

I nevertheless went ahead to see how hard it would be to re-implement that key component of Monkeysphere in Python, and ended up with monkeypy, an analysis of the Monkeysphere design and a first prototype of a Python-based OpenPGP / SSH conversion tool. This led to a flurry of issues on the PGPy project to fix UTF-8 support, add easier revocation checks and a few more. I also reviewed 9 pull requests that were pending in the repository. A key piece missing is keyserver interaction, and I made a first prototype of this as well.

It was interesting to review Monkeysphere's design. I hope to write more about this in the future, especially about how it could be improved, and find the time to do this re-implementation which could mean wider adoption of this excellent project.

Goodbye, Drupal

I finally abandoned all my Drupal projects, except the Aegir-related ones, which I somewhat still feel attached to, even though I do not do any Drupal development at all anymore. I do not use Drupal anywhere (except maybe as a visitor on some websites), I do not manage Drupal or Aegir sites anymore, and in fact, I have completely lost interest in writing anything remotely close to PHP.

So it was time to go. I have been involved with Drupal for a long time: I registered on Drupal.org in March 2001, almost 17 years ago. I was active for about 12 years on the site: I made my first post in 2003, which showed I was already "hacking core", which back then was the way to go. My first commit, to the mailman_api project, was also in 2003, and I kept contributing until my last commit in 2015, on the Aegir project of course.

That is probably the longest time I have spent on any software project, with the notable exception of the Debian project. I had a lot of fun working with Drupal: I met amazing people, traveled all over the world and made powerful websites that shook the world. Unfortunately, the Drupal project has decided to go in a direction I cannot support: Drupal 8 is a monster that is beyond anything needed for me, my friends or the organizations I support. It may serve startups and corporations building large sites very well, but it seems to have completely lost touch with its roots. Drupal used to be a simple tool for "community plumbing" (ref), it is now a behemoth to "Launch, manage, and scale ambitious digital experiences—with the flexibility to build great websites or push beyond the browser" (ref).

When I heard a friend had his small artist website made in Drupal, my first questions were "which version?" and "who takes care of the upgrades?" Those questions still don't seem to be resolved in the community: a whole part of the project is still stuck in the old Drupal 7 world, years after the first D8 alpha releases. Drupal 8 doesn't address the fundamental issue of upgradability and cost: Drupal sites are bigger than they ever were. The Drupal community seems to be following a very different way of thinking (notice the Amazon Echo device there) from what I hope the future of the Internet will be.

Goodbye, Drupal - it's been a great decade, but I'm already gone and I'm not missing you at all.

Fixing fraudulent memory cards

I finally got around to testing the interesting f3 project with a real use case, which allowed me to update the stressant documentation with actual real-world results. A friend told me the memory card he put in his phone had trouble with some pictures he was taking: they would show up as "gray" when he opened them, yet the thumbnails looked good.

It turns out his memory card wasn't really 32GB as advertised, but only 16GB. Any write sent to the card above that 16GB barrier was just discarded, and reads were returned as garbage. But thanks to the f3 project it was possible to fix that card so he could use it normally.
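
For reference, the repair boils down to two f3 commands: probe the device to find the last usable sector, then rewrite the partition table so it only covers the usable area. A sketch, assuming the card shows up as /dev/sdb (an example device name; note that f3probe in this mode is destructive):

# f3probe --destructive --time-ops /dev/sdb
# f3fix --last-sec=<last usable sector reported by f3probe> /dev/sdb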

In passing, I worked on the f3 project to merge the website with the main documentation, again using the Sphinx project to generate more complete documentation. The old website is still up, but hopefully a redirection will be installed soon.

Miscellaneous

  • quick linkchecker patch, still haven't found the time to make a real release

  • tested the goaccess program to inspect web server logs and get some visibility on the traffic to this blog. found two limitations: some graphs look weird and would benefit from a logarithmic scale, and it would be very useful to have plain text reports. otherwise this makes for a nice replacement for the aging analog program I am currently using, especially as a sysadmin quick diagnostic tool. for now I use this recipe to send an HTML report by email every week

  • found out about the mpd/subsonic bridge and sent a few issues about problems I found. turns out the project is abandoned and this prompted the author to archive the repository. too bad...

  • uploaded a new version of etckeeper into Debian, and started an upstream discussion about what to do with the pile of patches waiting there

  • uploaded a new version of the also aging smokeping to Debian to fix a silly RC bug about the rename dependency

  • GitHub also says I "created 7 repositories", "54 commits in 8 repositories", "24 pull requests in 11 repositories", "reviewed 19 pull requests in 9 repositories", and "opened 23 issues in 14 repositories".

  • blocked a bunch of dumb SMTP bots attacking my server using this technique. this gets rid of those messages I was getting many times per second in the worst times:

    lost connection after AUTH from unknown[180.121.131.155]
    
  • happily noticed that fail2ban gained the possibility of incremental ban times in the upcoming version 0.11, which I documented on Serverfault.com (a configuration sketch follows this list). unfortunately, the Debian package is out of date for now...

  • tested Firefox Quantum with great success: all my extensions are supported and it is much, much faster than it was. I do miss the tab list drop-down, which somehow disappeared, and I also miss a Debian package. I am also concerned about the maintainability of Rust in Debian, but we'll see how that goes.
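
Here is the fail2ban configuration sketch mentioned above. It assumes a fail2ban version of at least 0.11 (so, from upstream rather than the current Debian package) and the stock /etc/fail2ban layout; the factor and maximum values are arbitrary examples:

# cat >> /etc/fail2ban/jail.local <<'EOF'
[DEFAULT]
bantime.increment = true
bantime.factor    = 1
bantime.maxtime   = 5w
EOF
# systemctl restart fail2ban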

I spent only 30% of this month on paid work.

30 November, 2017 10:33PM

hackergotchi for Chris Lamb

Chris Lamb

Free software activities in November 2017

Here is my monthly update covering what I have been doing in the free software world in November 2017 (previous month):


Reproducible builds


Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.

This month I:

  • Presented at the Open Compliance Summit 2017 in Yokohama, Japan and had many follow-up conversations regarding using reproducible builds as a way of ensuring the long-term sustainability of civil infrastructure.
  • Created pull requests upstream for fswatch, bitz-server, stetl, nbsphinx & stardicter.
  • Updated diffoscope, our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues, to only parse DTB's version number, not any -dirty suffix. (#880279)
  • Expanded the documentation for disorderfs, our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues, highlighting the non-intuitive recommendation to sort instead of shuffle. [...]
  • Made some brief changes to buildinfo.debian.net, my experiment into how to process, store and distribute .buildinfo files after the Debian archive software has processed them:
    • Add a by-hash API endpoint. [...]
    • Support ?key__uid=X&key__uid=Y filtering. [...]
  • Updated our website:
    • Move the "contribute" page from the Debian wiki to /contribute/. [...]
    • Add a (redirecting) /docs/source-date-epoch/ page so we have a canonical URL. [...]
    • Add recent talks to Resources page. [...]
    • Cachebust CSS files. [...]
  • In Debian:
  • Categorised a large number of packages and issues in the Reproducible Builds "notes" repository.
  • Made some changes to jenkins.debian.net, which runs our comprehensive testing framework:
    • Ignore "warning" strings in commit messages causing builds to be marked as unstable. [...]
    • Update the email subject of status change mails away from Debian-specific URI. [...]
    • Move some IRC announcements to #debian-reproducible-changes. [...]
  • Worked on publishing our weekly reports. (#132, #133, #134 & #135)


Debian

My activities as the current Debian Project Leader are covered in my "Bits from the DPL" email to the debian-devel-announce mailing list.

Patches contributed

  • dget: Please support downloading packages over gopher://. (#880649)
  • gpaw: Incorrectly creates logging files called - instead of logging to standard output. (#882638)
  • pk4: Please avoid the use of avail in package descriptions. (#881343)

Debian LTS


This month I have been paid to work 13 hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 1161-1 for the redis key-value storage database to fix cross-protocol scripting attack.
  • Issued DLA 1162-1 & DLA 1163-1 to fix out-of-bounds memory vulnerabilities in apr and apr-util, portability libraries for various Apache applications.
  • Issued DLA 1173-1 for procmail, a tool used to sort incoming mail into various directories and filter out spam messages, to fix a heap-based buffer overflow.
  • Issued DLA 1174-1 to correct a denial of service vulnerability in the konversation IRC client related to parsing of color formatting codes.
  • Issued DLA 1175-1 for the lynx-cur web browser, preventing a use-after-free vulnerability in the HTML parser which could lead to memory/information disclosure.

Uploads

  • python-django:
  • redis:
    • 4.0.2-6 — Correct locations of redis-sentinel pidfiles. (#880980)
    • 4.0.2-7 — Add a redis metapackage. (#876475)
    • 4.0.2-8 — Use get_current_dir_name over a PATHMAX, etc. (#881684), don't rely on taskset existing for kFreeBSD-* (#881683), drop "memory efficiency" tests on advice from upstream (#881682) and allow the package to be bin-NMUable.
    • 4.0.2-9 — Modify aof.c for MAXPATHLEN issues. (#881684)
    • 4.0.2-9~bpo9+1 — Upload to stretch-backports.
  • bfs:
    • 1.1.4-1 — New upstream release.
    • 1.1.4-2 — Use upstream's new manpage.
  • python-daiquiri:
    • 1.3.0-2 — Ensure all dependencies are available for DEP-8 tests. (#882876)
  • redisearch (0.90.0~alpha1-1, 0.90.1-1, 0.99.0-1 & 0.99.2-1) — New upstream releases.

Finally, I also made a non-maintainer upload (NMU) of cpio (2.12+dfsg-5) to the experimental distribution.


Debian bugs filed

  • cappuccino: Broken symlink in /usr/games. (#880714)
  • statsmodels: Accesses raw.github.com during build. (#882641)
  • python-lti: Please run the upstream testsuite. (#880834)
  • git-buildpackage: gbp dch needs a better workflow description. (#880552)
  • audacity: New upstream release. (#880717)
  • python-djangorestframework: New upstream release. (#880538)
  • djangorestframework: New upstream release. (#880558)

I also filed 2 FTBFS bugs against django-axes & plinth.


30 November, 2017 05:51PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

distribution-wide projects in Debian

A few weeks ago, Lars and I had a discussion: wouldn't it be nice if, across the whole of Debian, applications stopped writing to arbitrary dot-files in your home directory (e.g. ~/.someapp) and instead abided by the XDG Base Directory Specification?
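
For concreteness, with none of the XDG_* variables set, the specification maps an application's files to the following defaults (a quick sketch for a hypothetical someapp; the echo lines simply print the paths an XDG-aware program would use):

$ echo "${XDG_CONFIG_HOME:-$HOME/.config}/someapp"       # configuration
$ echo "${XDG_DATA_HOME:-$HOME/.local/share}/someapp"    # data
$ echo "${XDG_CACHE_HOME:-$HOME/.cache}/someapp"         # cache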

Assuming that there was Debian-wide consensus that this was a good idea, in theory it could be achieved. The main problem would be that many of the upstream authors of the software we package would not accept the change. Consequently, Debian would be left carrying the patches.

We generally try to remain as close to upstream's code as possible and shy away from carrying too many patches in too many packages. The ideal lifecycle for a patch is for it to be accepted upstream. Patches are a burden for packagers, and we don't have enough packagers or packager time (or both).

Contrast this to OpenBSD. I know very little about the BSDs, and what little I do know I've mostly gleaned from the excellent blog posts by Ted Unangst, but my understanding is that when they import some software into their main project, their mental model appears to be closer to that of a fork of the upstream software than the way Debian (and most Linux distributions) operate. The entire OpenBSD Operating System is in a single (CVS!) version control repository. When some software is imported to the core Operating System, that software's source is copied into place within the tree and from that point forward is managed as part of the whole.

(OpenBSD also have a separate source code collection called "ports", which is managed in a different manner which is much closer to a Linux distribution's notion of packaging, but I won't cover that further here.)

If they decided that a distribution-wide project was worthy, they would have no hesitation in making that change across all the software in their Operating System. Their focus is on consistency and they seem to maintain that focus rather than thinking about packages in relative isolation.

Although I do appreciate the practical problem of Debian managing a lot of patches, I am sometimes envious of OpenBSD's approach and corresponding perspective. Although I really do not miss CVS.

Thanks to Ryan Freeman for proof-reading this blog post. The remaining errors are mine.

30 November, 2017 04:36PM

Enrico Zini

Qt Creator cross-platform development in Stretch: chroot automation

I wrote a tool to automate the creation and maintenance of Qt cross-build environments built from packages in Debian Stretch.

It allows you to:

  • Do cross-architecture development with Qt Creator, including remote debugging on the target architecture
  • Compile using native compilers and cross-compilers, to avoid running the compilers inside an emulator, which would make the build slower
  • Leverage all of Debian as development environment, using existing Debian packages for cross-build-dependencies

Getting started

# Creates an armhf environment under the current directory
$ sudo cbqt ./armhf --create --verbose
2017-11-30 14:09:23,715 INFO armhf: Creating /home/enrico/lavori/truelite/system/cross/armhf
…
2017-11-30 14:14:49,887 INFO armhf: Configuring cross-build environment

# Get information about an existing chroot.
# Note: the output is machine-parsable yaml
$ cbqt ./armhf --info
name: armhf
path: ./armhf
arch: armhf
arch_triplet: arm-linux-gnueabihf
exists: true
configured: true
issues: []

# Create a qmake wrapper for this environment
$ sudo ./cbqt ./armhf --qmake -o /usr/local/bin/qmake-armhf

# Install the build-depends you need
# Note: :arch is added automatically to package names if no arch is explicitly
#       specified
$ sudo ./cbqt ./armhf --install libqt5svg5-dev libmosquittopp-dev qtwebengine5-dev

Building packages

To build a package, use the qmake wrapper generated with cbqt --qmake instead of the normal qmake:

$ qmake-armhf -makefile
$ make
arm-linux-gnueabihf-g++ … -I../armhf/usr/include/arm-linux-gnueabihf/… -I../armhf/usr/lib/arm-linux-gnueabihf/qt5/mkspecs/arm-linux-gnueabihf -o browser.o browser.cpp
…
/home/enrico/…/armhf/usr/bin/moc …
…
arm-linux-gnueabihf-g++ … -Wl,-rpath-link,/home/enrico/…/armhf/lib/arm-linux-gnueabihf -Wl,-rpath-link,/home/enrico/…/armhf/usr/lib/arm-linux-gnueabihf -Wl,-rpath-link,/home/enrico/…/armhf/usr/lib/ -o program browser.o … -L/home/enrico/…/armhf/usr/lib/arm-linux-gnueabihf …

Building in Qt Creator

Configure a new Kit in Qt Creator:

  1. Tools/Options, then Build & Run, then Kits, then Add
  2. Name: armhf (or anything you like)
  3. Qt version: choose the one autodetected from /usr/local/bin/qmake-armhf
  4. In Compilers, add a GCC C compiler with path arm-linux-gnueabihf-gcc, and a GCC C++ compiler with path arm-linux-gnueabihf-g++
  5. Choose the newly created compilers in the kit
  6. Dismiss the dialog with "OK": the new kit is ready

Now you can choose the default kit to build and run locally, and the armhf kit for remote cross-development.

Where to get it

Here!

Credits

This has been done as part of my work with Truelite.

30 November, 2017 08:35AM

November 29, 2017

Reproducible builds folks

Reproducible Builds: Weekly report #135

Here's what happened in the Reproducible Builds effort between Sunday November 19 and Saturday November 25 2017:

Upcoming events

Reproducible Builds will have an assembly at 34c3, the "Galactic Congress". ;-) Currently we are discussing meeting there informally every day at 13:37 UTC.

Reproducible Arch Linux

Since November 23 2017, Arch Linux is again being continuously tested for reproducibility. However, this time a patched pacman is being used which can create reproducible packages. After 4 days of testing, 18% of all packages in the core, extra, multilib and community Arch repos have been tested, with these — very preliminary — results:

  • core: 77.1% reproducible, all 197 packages tested.
  • extra: 75.2% reproducible, 514 packages (of 2250 total) tested.
  • multilib: 82.6% reproducible, all 259 packages tested.
  • community: 76.5% reproducible, 487 packages (of 7739 total) tested.

Jelle van der Waa also wrote a blog post with more details on how this has already led to more QA work in Arch.

So all in all, it looks like 77.2% of the tested Arch Linux packages are now reproducible! With an unreleased pacman version and without some variations we apply when testing Debian… still this is a very good start! Kudos to all involved.

Packages reviewed and fixed, and bugs filed

Patches filed upstream:

  • Bernhard M. Wiedemann:
  • Chris Lamb:
    • gpaw - (merged) embedded logging output
    • bitz-server (merged) - build path

Patches filed in Debian:

Patches filed in OpenSUSE:

Reviews of unreproducible packages

97 package reviews have been added, 56 have been updated and 42 have been removed in this week, adding to our knowledge about identified issues.

2 issue types have been added:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (62)
  • Gilles Filippini (1)
  • Gregor Riepl (1)
  • James Cowgill (1)
  • Laurent Bigonville (1)
  • Matthias Klose (1)
  • Sylvestre Ledru (2)
  • gregor herrmann (1)

reproducible-faketools

  • reproducible-faketools 0.3.10 was released with support for:
    • Reduced randomness (/dev/random and urandom are actually /dev/zero)
    • Disabled ASLR and
    • Building with fixed PIDs.
    • Also the tar wrapper script got a bug fix.

reprotest development

reproducible-website development

tests.reproducible-builds.org

  • anthraxx worked on reproducible Arch Linux (19 commits)
  • Holger Levsen did some work on reproducible Debian:
    • aa9ce22d6 - Update email subject of status change mails to use t.r-b.o/debian - thanks to lamby for #882186
  • Holger mostly worked on reproducible Arch Linux that week (56 commits).
  • Misc tests.r-b.o work by Holger:
    • 0d79ab54a - reproducible Fedora: be explicit that this is stalled atm
    • Holger also reviewed and deployed 25 commits from other people.
    • Finally, Holger moved IRC notifications for jenkins.debian.net from #debian-reproducible to #reproducible-builds (and kept them on #debian-qa as well).
  • Johannes Löthberg worked on Arch Linux as well (2 commits)
  • kpcyrd also worked on Arch Linux (5 commits)

Finally there was a discussion about how to generalise the database schema to support several projects, triggered by the recent work on reproducible Arch, but also previously discussed in the context of openSUSE, LEDE and FreeBSD.

Misc.

This week's edition was written by Ximin Luo, Bernhard M. Wiedemann, Holger Levsen and Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

29 November, 2017 08:45PM

Enrico Zini

Qt Creator cross-platform development in Stretch: chroots

My next step in creating a cross-platform Qt development environment is trying to set it up in a chroot and make it usable from Qt Creator, so that both build-dependencies and cross-build-dependencies can be available even when they are not coinstallable.

Building a chroot

I built a chroot to host the armhf build-dependencies:

$ sudo cdebootstrap stretch arm-linux-gnueabihf
$ sudo systemd-nspawn -D arm-linux-gnueabihf
# dpkg --add-architecture armhf
# apt update
# apt install qtbase5-dev qtbase5-dev-tools qtbase5-dev:armhf

The cross-compilers need to be outside the chroot. I tried to use cross-compilers installed inside the chroot while building outside the chroot, and they fail to link at runtime, since they expect their shared libraries to be in /usr.

I put this qt.conf in the chroot:

[Paths]
Prefix=/home/enrico/…/arm-linux-gnueabihf/usr
ArchData=lib/arm-linux-gnueabihf/qt5
Binaries=lib/qt5/bin
Data=share/qt5
Documentation=share/qt5/doc
Examples=lib/arm-linux-gnueabihf/qt5/examples
Headers=include/arm-linux-gnueabihf/qt5
HostBinaries=bin
HostData=lib/arm-linux-gnueabihf/qt5
HostLibraries=lib/arm-linux-gnueabihf
Imports=lib/arm-linux-gnueabihf/qt5/imports
Libraries=lib/arm-linux-gnueabihf
LibraryExecutables=lib/arm-linux-gnueabihf/qt5/libexec
Plugins=lib/arm-linux-gnueabihf/qt5/plugins
Qml2Imports=lib/arm-linux-gnueabihf/qt5/qml
Settings=/etc/xdg
Translations=share/qt5/translations
TargetSpec=arm-linux-gnueabihf

I added the custom mkspecs to the chroot:

$ cd arm-linux-gnueabihf/usr/lib/arm-linux-gnueabihf/qt5/mkspecs/
$ sudo cp -r linux-g++ arm-linux-gnueabihf
$ sudo vi arm-linux-gnueabihf/qmake.conf

This is the content of usr/lib/arm-linux-gnueabihf/qt5/mkspecs/arm-linux-gnueabihf/qmake.conf:

#
# qmake configuration for arm-linux-gnueabihf
#

MAKEFILE_GENERATOR      = UNIX
CONFIG                 += incremental
QMAKE_INCREMENTAL_STYLE = sublib

include(../common/linux.conf)
include(../common/gcc-base-unix.conf)
include(../common/g++-unix.conf)

QMAKE_RPATHLINKDIR += /home/enrico/…/arm-linux-gnueabihf/lib/arm-linux-gnueabihf
QMAKE_RPATHLINKDIR += /home/enrico/…/arm-linux-gnueabihf/usr/lib/arm-linux-gnueabihf
QMAKE_RPATHLINKDIR += /home/enrico/…/arm-linux-gnueabihf/usr/lib/

QMAKE_COMPILER          = arm-linux-gnueabihf-gcc

QMAKE_CC                = arm-linux-gnueabihf-gcc

QMAKE_LINK_C            = $$QMAKE_CC
QMAKE_LINK_C_SHLIB      = $$QMAKE_CC

QMAKE_CXX               = arm-linux-gnueabihf-g++

QMAKE_LINK              = $$QMAKE_CXX
QMAKE_LINK_SHLIB        = $$QMAKE_CXX

load(qt_config)

Note QMAKE_RPATHLINKDIR, which was not present in the previous post: since linking needs to happen against libraries in /home/enrico/…/arm-linux-gnueabihf/…, we need to tell the linker not to go and search in /.

The extra QMAKE_RPATHLINKDIR pointing to usr/lib is a workaround for libsrtp0 installing files outside multiarch directories (#765173):

# dpkg -L libsrtp0 | grep usr/lib
/usr/lib
/usr/lib/libsrtp.so.0.0
/usr/lib/libsrtp.so.0

(libsrtp0 is a dependency of libqt5webenginecore5)

I changed the wrapper /usr/local/bin/qmake-arm-linux-gnueabihf to point to the chroot:

#!/usr/bin/python3

import sys, os

QT_CONFIG = "/home/enrico/…/arm-linux-gnueabihf/qt.conf"

argv0 = os.path.join(os.path.dirname(sys.argv[0]), "qmake")

if len(sys.argv) == 1:
    os.execv("/usr/bin/qmake", [argv0] + sys.argv[1:])
else:
    os.execv("/usr/bin/qmake", [argv0] + sys.argv[1:] + ["-qtconf", QT_CONFIG])

And it all works. Even the test application that requires QtWebEngine builds and links fine.

Credits

This has been done as part of my work with Truelite.

29 November, 2017 02:53PM

Mike Gabriel

Call for Translations: Arctica Greeter and Ayatana Indicators

This is a quick call for help to all non-English native speakers.

Please visit projects hosted by the Arctica Project and the Ayatana Indicators project on Weblate and help localizing our projects into your native language.

Projects waiting for Your Language Expertise

The projects on Weblate are:

Arctica Project:
https://hosted.weblate.org/projects/arctica-framework/

Ayatana Indicators:
https://hosted.weblate.org/projects/ayatana-indicators/

If interested in helping with localizations for these project, please add your language for these projects to your Hosted Weblate Dashboard and stay informed when changes occur, components get added, etc.

Credits

Thanks to all those who already have contributed with translation, so far. However, more work is needed. Let's come together!!!

light+love
Mike Gabriel

29 November, 2017 02:24PM by sunweaver

Enrico Zini

Qt Creator cross-platform development in Stretch: consolidation

Time to consolidate the exploration done yesterday.

qmake -qtconf

Looking at the work done by the Qt/KDE team I found that there is no need to rebuild qmake, as it can be configured by using the -qtconf option.

The work recently done on debhelper provides a starting point that I can build on.

This qtconf makes qmake use the right paths, including running rcc, moc and uic from /usr/bin/, removing the need for the dirty hack:

[Paths]
Prefix=/usr
ArchData=lib/arm-linux-gnueabihf/qt5
Binaries=lib/qt5/bin
Data=share/qt5
Documentation=share/qt5/doc
Examples=lib/arm-linux-gnueabihf/qt5/examples
Headers=include/arm-linux-gnueabihf/qt5
HostBinaries=bin
HostData=lib/arm-linux-gnueabihf/qt5
HostLibraries=lib/arm-linux-gnueabihf
Imports=lib/arm-linux-gnueabihf/qt5/imports
Libraries=lib/arm-linux-gnueabihf
LibraryExecutables=lib/arm-linux-gnueabihf/qt5/libexec
Plugins=lib/arm-linux-gnueabihf/qt5/plugins
Qml2Imports=lib/arm-linux-gnueabihf/qt5/qml
Settings=/etc/xdg
Translations=share/qt5/translations

It still needs overriding the compiler settings, to make it use a compiler different than g++:

qmake -makefile -qtconf /tmp/qmake.conf QMAKE_CC=arm-linux-gnueabihf-gcc QMAKE_CXX=arm-linux-gnueabihf-g++ QMAKE_LINK=arm-linux-gnueabihf-g++

But then -m64 creeps in and breaks the cross-build (this has been fixed since qtbase 5.9.1+dfsg-12, but my target is stretch):

$ make
arm-linux-gnueabihf-g++ -c -m64 -pipe -std=c++11 -O2 -Wall -W -D_REENTRANT -fPIC -DQT_DEPRECATED_WARNINGS -DQT_NO_DEBUG -DQT_SVG_LIB -DQT_WIDGETS_LIB -DQT_GUI_LIB -DQT_CORE_LIB -I. -isystem /usr/include/arm-linux-gnueabihf/qt5 -isystem /usr/include/arm-linux-gnueabihf/qt5/QtSvg -isystem /usr/include/arm-linux-gnueabihf/qt5/QtWidgets -isystem /usr/include/arm-linux-gnueabihf/qt5/QtGui -isystem /usr/include/arm-linux-gnueabihf/qt5/QtCore -I. -I/usr/lib/arm-linux-gnueabihf/qt5/mkspecs/linux-g++-64 -o main.o main.cpp
arm-linux-gnueabihf-g++: error: unrecognized command line option ‘-m64’
Makefile:401: recipe for target 'main.o' failed
make: *** [main.o] Error 1

Where does it come from? I found that I could run qmake -query:

$ qmake -query -qtconf /tmp/qmake.conf
QT_SYSROOT:
QT_INSTALL_PREFIX:/usr
QT_INSTALL_ARCHDATA:/usr/lib/arm-linux-gnueabihf/qt5
QT_INSTALL_DATA:/usr/share/qt5
QT_INSTALL_DOCS:/usr/share/qt5/doc
QT_INSTALL_HEADERS:/usr/include/arm-linux-gnueabihf/qt5
QT_INSTALL_LIBS:/usr/lib/arm-linux-gnueabihf
QT_INSTALL_LIBEXECS:/usr/lib/arm-linux-gnueabihf/qt5/libexec
QT_INSTALL_BINS:/usr/lib/qt5/bin
QT_INSTALL_TESTS:/usr/tests
QT_INSTALL_PLUGINS:/usr/lib/arm-linux-gnueabihf/qt5/plugins
QT_INSTALL_IMPORTS:/usr/lib/arm-linux-gnueabihf/qt5/imports
QT_INSTALL_QML:/usr/lib/arm-linux-gnueabihf/qt5/qml
QT_INSTALL_TRANSLATIONS:/usr/share/qt5/translations
QT_INSTALL_CONFIGURATION:/etc/xdg
QT_INSTALL_EXAMPLES:/usr/lib/arm-linux-gnueabihf/qt5/examples
QT_INSTALL_DEMOS:/usr/lib/arm-linux-gnueabihf/qt5/examples
QT_HOST_PREFIX:/usr
QT_HOST_DATA:/usr/lib/arm-linux-gnueabihf/qt5
QT_HOST_BINS:/usr/bin
QT_HOST_LIBS:/usr/lib/arm-linux-gnueabihf
QMAKE_SPEC:linux-g++-64
QMAKE_XSPEC:linux-g++-64
QMAKE_VERSION:3.0
QT_VERSION:5.7.1

I suppose that QMAKE_XSPEC should be linux-g++ instead of linux-g++-64.

Poking through Qt's sources I found that I could also tweak TargetSpec, to change it to linux-g++ from its default of linux-g++-64:

[Paths]
Prefix=/usr
ArchData=lib/arm-linux-gnueabihf/qt5
Binaries=lib/qt5/bin
Data=share/qt5
Documentation=share/qt5/doc
Examples=lib/arm-linux-gnueabihf/qt5/examples
Headers=include/arm-linux-gnueabihf/qt5
HostBinaries=bin
HostData=lib/arm-linux-gnueabihf/qt5
HostLibraries=lib/arm-linux-gnueabihf
Imports=lib/arm-linux-gnueabihf/qt5/imports
Libraries=lib/arm-linux-gnueabihf
LibraryExecutables=lib/arm-linux-gnueabihf/qt5/libexec
Plugins=lib/arm-linux-gnueabihf/qt5/plugins
Qml2Imports=lib/arm-linux-gnueabihf/qt5/qml
Settings=/etc/xdg
Translations=share/qt5/translations
TargetSpec=linux-g++

Now, I still need to override the compilers, but that's all I need to do to get a build:

$ qmake -makefile -qtconf /tmp/qmake.conf QMAKE_CC=arm-linux-gnueabihf-gcc QMAKE_CXX=arm-linux-gnueabihf-g++ QMAKE_LINK=arm-linux-gnueabihf-g++
$ make
…
$ file usr/bin/program
usr/bin/program: ELF 32-bit LSB shared object, ARM, EABI5 version 1 (GNU/Linux), dynamically linked, interpreter /lib/ld-linux-armhf.so.3, for GNU/Linux 3.2.0, BuildID[sha1]=541e472a29a2b309af2fea40d9aa3439e6616bdd, not stripped

Can I get rid of needing to override the compilers? Let's try poking with TargetSpec:

# cd /usr/lib/arm-linux-gnueabihf/qt5/mkspecs/
# cp -r linux-g++ arm-linux-gnueabihf
# cd arm-linux-gnueabihf
# …tweak…
# cat qmake.conf
#
# qmake configuration for arm-linux-gnueabihf
#

MAKEFILE_GENERATOR      = UNIX
CONFIG                 += incremental
QMAKE_INCREMENTAL_STYLE = sublib

include(../common/linux.conf)
include(../common/gcc-base-unix.conf)
include(../common/g++-unix.conf)

QMAKE_COMPILER          = arm-linux-gnueabihf-gcc

QMAKE_CC                = arm-linux-gnueabihf-gcc

QMAKE_LINK_C            = $$QMAKE_CC
QMAKE_LINK_C_SHLIB      = $$QMAKE_CC

QMAKE_CXX               = arm-linux-gnueabihf-g++

QMAKE_LINK              = $$QMAKE_CXX
QMAKE_LINK_SHLIB        = $$QMAKE_CXX

load(qt_config)

Now, using arm-linux-gnueabihf as TargetSpec:

[Paths]
Prefix=/usr
ArchData=lib/arm-linux-gnueabihf/qt5
Binaries=lib/qt5/bin
Data=share/qt5
Documentation=share/qt5/doc
Examples=lib/arm-linux-gnueabihf/qt5/examples
Headers=include/arm-linux-gnueabihf/qt5
HostBinaries=bin
HostData=lib/arm-linux-gnueabihf/qt5
HostLibraries=lib/arm-linux-gnueabihf
Imports=lib/arm-linux-gnueabihf/qt5/imports
Libraries=lib/arm-linux-gnueabihf
LibraryExecutables=lib/arm-linux-gnueabihf/qt5/libexec
Plugins=lib/arm-linux-gnueabihf/qt5/plugins
Qml2Imports=lib/arm-linux-gnueabihf/qt5/qml
Settings=/etc/xdg
Translations=share/qt5/translations
TargetSpec=arm-linux-gnueabihf

It finally works:

$ qmake -makefile -qtconf /tmp/qmake.conf
$ make
$ file usr/bin/program
usr/bin/program: ELF 32-bit LSB shared object, ARM, EABI5 version 1 (GNU/Linux), dynamically linked, interpreter /lib/ld-linux-armhf.so.3, for GNU/Linux 3.2.0, BuildID[sha1]=541e472a29a2b309af2fea40d9aa3439e6616bdd, not stripped

Skipping prl file '/usr/lib/arm-linux-gnueabihf/libQt5PlatformSupport.prl', because it cannot be opened (No such file or directory).

Turn this into a kit for Qt Creator

I could not find a way to configure -qtconf in a kit in Qt Creator. Since Qt Creator detects versions of Qt by pointing at their qmake, I resorted to making a custom qmake wrapper:

#!/usr/bin/python3

# /usr/local/bin/qmake-arm-linux-gnueabihf

import sys, os

QT_CONFIG = "/home/enrico/qt-arm-linux-gnueabihf.conf"

argv0 = os.path.join(os.path.dirname(sys.argv[0]), "qmake")

if len(sys.argv) == 1:
    os.execv("/usr/bin/qmake", [argv0] + sys.argv[1:])
else:
    os.execv("/usr/bin/qmake", [argv0] + sys.argv[1:] + ["-qtconf", QT_CONFIG])

Just calling qmake -qtconf /home/enrico/qt-arm-linux-gnueabihf.conf "$@" is not enough: for some invocations of qmake, providing -qtconf gives an error:

$ qmake -qtconf /home/enrico/qt-arm-linux-gnueabihf.conf
qmake: could not find a Qt installation of 'conf'

If needed I can look at qmake's sources to see what is going on, but for now the wrapper above seems to be enough to cover all the needs of Qt Creator, and if the wrapper is in the path, Qt Creator manages to autodetect it on startup.

These are the details of a working arm-linux-gnueabihf kit for Qt Creator:

  • Device type: Generic Linux Device
  • Device: my hi-fi
  • Sysroot: blank
  • C compiler: /usr/bin/arm-linux-gnueabihf-gcc
  • C++ compiler: /usr/bin/arm-linux-gnueabihf-g++
  • Qt version: /usr/local/bin/qmake-arm-linux-gnueabihf

And with this, software cross-builds locally and can be deployed and tested remotely.

Summary of the situation so far

This is what is needed to do cross-build now:

  • Depend on crossbuild-essential-armhf
  • Install :armhf versions of Qt and all build-dependencies
  • Deploy a custom mkspec
  • Deploy a custom qt.conf
  • Deploy qmake wrapper
  • Configure a kit

No dirty hacks, everything can be shipped by a single .deb package.

Further things to investigate:

  • Find out if there is a way to package and install a kit for Qt Creator
  • Find an easy way to use a chroot as the cross-build environment, to be able to use cross-build-dependencies that cannot be coinstalled together with their native version

Credits

This has been done as part of my work with Truelite.

29 November, 2017 01:07PM

François Marier

Proxy ACME challenges to a single machine

The Libravatar mirrors are setup using DNS round-robin which makes it a little challenging to automatically provision Let's Encrypt certificates.

In order to be able to use Certbot's webroot plugin, I need to be able to simultaneously serve a randomly-named file from the webroot of each mirror. The reason is that the verifier will connect to seccdn.libravatar.org, but there's no way to know which of the DNS entries it will hit. I could copy the file over to all of the mirrors, but that would be annoying since some of the mirrors are run by volunteers and I don't have direct access to them.

Thankfully, Scott Helme has shared his elegant solution: proxy the .well-known/acme-challenge/ directory from all of the mirrors to a single validation host. Here's the exact configuration I ended up with.

DNS Configuration

In order to serve the certbot validation files separately from the main service, I created a new hostname, acme.libravatar.org, pointing to the main Libravatar server:

CNAME acme libravatar.org.

Mirror Configuration

On each mirror, I created a new Apache vhost on port 80 to proxy the acme challenge files by adding the following to the existing config file for the port 443 vhost (/etc/apache2/sites-available/libravatar-seccdn.conf):

<VirtualHost *:80>
    ServerName __SECCDNSERVERNAME__
    ServerAdmin __WEBMASTEREMAIL__

    ProxyPass /.well-known/acme-challenge/ http://acme.libravatar.org/.well-known/acme-challenge/
    ProxyPassReverse /.well-known/acme-challenge/ http://acme.libravatar.org/.well-known/acme-challenge/
</VirtualHost>

Then I enabled the right modules and restarted Apache:

a2enmod proxy
a2enmod proxy_http
systemctl restart apache2.service

Finally, I added a cronjob in /etc/cron.daily/commit-new-seccdn-cert to commit the new cert to etckeeper automatically:

#!/bin/sh
cd /etc/libravatar
/usr/bin/git commit --quiet -m "New seccdn cert" seccdn.crt seccdn.pem seccdn-chain.pem > /dev/null || true

Main Configuration

On the main server, I created a new webroot:

mkdir -p /var/www/acme/.well-known

and a new vhost in /etc/apache2/sites-available/acme.conf:

<VirtualHost *:80>
    ServerName acme.libravatar.org
    ServerAdmin webmaster@libravatar.org
    DocumentRoot /var/www/acme
    <Directory /var/www/acme>
        Options -Indexes
    </Directory>
</VirtualHost>

before enabling it and restarting Apache:

a2ensite acme
systemctl restart apache2.service

Registering a new TLS certificate

With all of this in place, I was able to register the cert easily using the webroot plugin on the main server:

certbot certonly --webroot -w /var/www/acme -d seccdn.libravatar.org

The resulting certificate will then be automatically renewed before it expires.
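
To double-check that renewal will go through the same webroot and proxy setup, a dry run can be performed at any time (standard certbot functionality, nothing specific to this setup):

certbot renew --dry-run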

29 November, 2017 06:10AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rcpp now used by 1250 CRAN packages

1250 Rcpp packages

Earlier today Rcpp passed 1250 reverse-dependencies on CRAN, another big milestone. The graph on the left depicts the growth of Rcpp usage (as measured by Depends, Imports and LinkingTo, but excluding Suggests) over time.

Rcpp cleared 300 packages in November 2014. It passed 400 packages in June 2015 (when I only tweeted about it), 500 packages in late October 2015, 600 packages last March, 700 packages last July, 800 packages last October, 900 packages early January, and 1000 packages in April. The chart extends to the very beginning via manually compiled data from CRANberries and checked with crandb. The next part uses manually saved entries. The core (and by far largest) part of the data set was generated semi-automatically via a short script appending updates to a small file-based backend. A list of packages using Rcpp is kept on this page.

Also displayed in the graph is the relative proportion of CRAN packages using Rcpp. The four per-cent hurdle was cleared just before useR! 2014 where I showed a similar graph (as two distinct graphs) in my invited talk. We passed five percent in December of 2014, six percent in July of 2015, seven percent just before Christmas 2015, eight percent last summer, nine percent mid-December 2016 and then cracked ten percent this summer.

1250 user packages is staggering. For comparison, we can look at the progression of CRAN itself, compiled by Henrik in a series of posts and emails to the main development mailing list. A decade ago CRAN itself did not have 1250 packages, and here we are approaching 12k with Rcpp at 10% and growing steadily. Amazeballs.

This puts a whole lot of responsibility on us in the Rcpp team as we continue to keep Rcpp as performant and reliable as it has been.

And with that, and as always, a very big Thank You! to all users and contributors of Rcpp for help, suggestions, bug reports, documentation or, of course, code.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

29 November, 2017 01:23AM

hackergotchi for Norbert Preining

Norbert Preining

Debian/TeX Live and Biber updates

Halloween is over, Thanksgiving is over, snow is starting to fall here in Ishikawa, time for a new update of TeX Live (2017.20171128-1) – and this time also Biber (2.9-1) – in Debian.

As usual, biblatex and Biber updates go hand in hand, so here we are with the newest version of the next-generation bibliography management for LaTeX.

Other than this, nothing spectacular here but the usual flow of updates and new packages. Notable might be the IBM Plex font family (see the CTAN packages plex and plex-otf), which replaces Helvetica Neue in official IBM communications. A complete family with sans, serif, and mono in various weights.

Enjoy.

New packages

beamertheme-saintpetersburg, bxtexlogo, cm-mf-extra-bold, ctan-o-mat, gridslides, lyluatex, plex, plex-otf, simpleinvoice, xii-lat.

Updated packages

GS1, abnt, adobemapping, afm2pl, amsldoc-it, amstex, amsthdoc-it, archaeologie, arsclassica, babel, babel-georgian, baskervillef, beamer, beebe, beilstein, besjournals, bib2gls, biber, biblatex, biblatex-abnt, biblatex-anonymous, biblatex-apa, biblatex-archaeology, biblatex-arthistory-bonn, biblatex-bookinother, biblatex-fiwi, biblatex-manuscripts-philology, biblatex-oxref, biblatex-publist, biblatex-realauthor, biblatex-sbl, biblatex-shortfields, bibtexu, c90, ccicons, cjkutils, cm, cmexb, cns, context, context-filter, context-fullpage, context-letter, context-title, contracard, crossreftools, crossrefware, cslatex, csplain, ctex, cyrillic-bin, cyrplain, datatool, datetime2-german, datetime2-spanish, dnp, ducksay, dvipos, dynkin-diagrams, eplain, etoolbox, europasscv, euxm, exam, feyn, fontspec, fontware, genmisc, glossaries, glossaries-extra, glyphlist, graphics-def, guide-to-latex, gustlib, gustprog, hyphen-base, impatient-cn, ipaex, iscram, jadetex, jlreq, jmn, kantlipsum, kluwer, kpathsea, l3build, l3experimental, l3kernel, l3packages, lambda, latex-bin, latex2e-help-texinfo-spanish, latexconfig, latexmk, limecv, lollipop, lt3graph, luatex, luatexja, lwarp, manfnt-font, mathtools, metafont, mex, mfirstuc, mflua, mltex, mptopdf, musicography, musixtex, novel, octave, omegaware, otibet, overlays, pdfpages, pdfwin, pkgloader, platex, pst-circ, pst-fractal, pst-ghsb, pst-pdgr, pst-plot, pxrubrica, qpxqtx, reledmac, repere, revtex4, roex, scsnowman, siunitx, srbook-mem, sympytexpackage, synctex, tetex, tex4ht, texdoc, texsis, tikzducks, tlshell, ttfutils, tugboat, turabian-formatting, unicode-math, uplatex, vlna, web, witharrows, xcharter, xecjk, xetex, xetexconfig, xii, xmltex, xmltexconfig, ycbook, zebra-goodies.

29 November, 2017 01:02AM by Norbert Preining

November 28, 2017

hackergotchi for Lisandro Damián Nicanor Pérez Meyer

Lisandro Damián Nicanor Pérez Meyer

Experimental cross compiling Qt in Debian packages

Some time ago we, the Qt/KDE team, were contacted by Helmut Grohne. He was trying to cross compile Debian packages in general, thanks to Ubuntu/Debian's multi-arch support, and he was having problems with Qt-based ones.

As far as we understand, Qt upstream only supports cross compiling by having a toolchain for each pair of architectures involved. In Debian terms, and only considering current official architectures, that would mean building 90 cross toolchains. It clearly doesn't scale.

So we set out to discuss whether we could somehow use multi-arch to let Debian packages using Qt cross compile.

In the meantime Enrico Zini had the same idea. He wrote a nice summary of the situation at that time in his blog.

After much thinking, some ideas were tested and we got to the point of solving/hacking the issue. As this is not something directly supported by upstream you should take care, and file bugs whenever necessary.

Dmitry Schachnev from our team's side and Helmut from the debian-cross side worked a lot on it, and I would like to present what they have done. To be fair it's mostly described in our team's gobby qt-cross page, but I would like to give it some publicity in order to let people know about it and, why not, find and help solve bugs.

General stuff

The first thing that was done was to move Qt binaries from their (Debian original) multi-arch path to a non-multi-arch one, providing symlinks for compatibility. In this way the path of the binaries is the same for any arch (why they were not there is a long story, but nothing to worry about now).

This move needed some other touches, like qtchooser being updated with the new paths.

The other changes were related to how we do our packaging:


  • All packages containing binaries are now M-A:foreign.
  • Some packages (qt3d, qtwayland) had binaries split to allow that.
  • qttools5-dev-tools now depends on libqt5sql5-sqlite (not uploaded yet)

qmake related changes

We also needed to address qmake. To begin with, we split the package containing it into qt5-qmake-bin (M-A:foreign) and qt5-qmake (M-A:same). The first one has the binaries and the second the relevant mkspecs for each arch.

The rest of the "magic" comes from debhelper. It generates a qt.conf file with the right paths for each cross compilation and also passes cross QMAKE_CC and QMAKE_CXX to qmake when needed.
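To make that concrete, a qt.conf for an armhf cross build could look roughly like the following sketch. The keys are standard qmake properties, but the exact file the debhelper integration generates may well differ, and the paths here are only illustrative:

[Paths]
Prefix=/usr
HostPrefix=/usr
HostBinaries=/usr/lib/qt5/bin
Headers=/usr/include/arm-linux-gnueabihf/qt5
Libraries=/usr/lib/arm-linux-gnueabihf
ArchData=/usr/lib/arm-linux-gnueabihf/qt5
Plugins=/usr/lib/arm-linux-gnueabihf/qt5/plugins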


autotools

qt5-qmake will ship a /usr/bin/$(DEB_HOST_GNU_TYPE)-qmake executable for use with AC_CHECK_TOOL (not uploaded yet).
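For autotools-based projects, picking up such a tool is the usual AC_CHECK_TOOL pattern; a minimal configure.ac fragment (the QMAKE variable name and the fallback are just a convention for this sketch, not something shipped by the package) might be:

# configure.ac
AC_INIT([myapp], [1.0])
# When configured with --host=arm-linux-gnueabihf this looks for
# arm-linux-gnueabihf-qmake first and falls back to plain qmake.
AC_CHECK_TOOL([QMAKE], [qmake], [qmake])
AC_OUTPUT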

There is still work to be done, but so far we have been able to cross compile packages using for example sbuild.
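As a rough idea of what such a cross build invocation looks like (the package name is only an example, and this assumes sbuild's --host option for cross building):

sbuild --host=armhf -d unstable mypackage_1.0-1.dsc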

Edit 20171129 11:43 ARST: You should really look at Enrico's new post.

28 November, 2017 07:35PM by Lisandro Damián Nicanor Pérez Meyer (noreply@blogger.com)

Petter Reinholdtsen

Enrico Zini

Qt Creator cross-platform development in Stretch: exploration

I have been asked to create a system that makes it easy to do cross-architecture Qt development with Debian Stretch.

The goal is this:

  • Do development with Qt Creator, including remote debugging on the target architecture
  • Compile using native code, to avoid running the compilers inside an emulator, which would make the build slower
  • Leverage all of Debian as development environment, using existing Debian packages for build-dependencies
  • Set everything up just by installing a single .deb package, which ideally gives Qt Creator a new kit that just works

I did some exploration on this some time ago, and the Qt/KDE and cross-build people did some work on this, too.

Now I'm trying to build on all that. I need to target Stretch so I cannot build on the recent improvements in Qt packaging, but I can at least use the experience that went into those changes.

I have two sample Qt applications to try to cross-build: one that depends on a non-Qt library (libmosquittopp-dev), and one that depends on a nasty Qt library (qtwebengine5-dev).

This post has the notes from the first day of trying out different strategies.

It begins

I imported the two projects into Qt Creator, installed their amd64 dependencies and made sure they build for the current system with the default kit, no cross-building yet.

That works, good.

Now let's see about the rest:

dpkg --add-architecture armhf
apt install crossbuild-essential-armhf

A cross-build kit for Qt Creator

I created a new armhf kit for Qt Creator:

  • Device type: Generic Linux Device
  • Device: my hi-fi
  • Sysroot: blank
  • C compiler: /usr/bin/arm-linux-gnueabihf-gcc
  • C++ compiler: /usr/bin/arm-linux-gnueabihf-g++
  • Qt version: /usr/lib/arm-linux-gnueabihf/qt5/bin/qmake

I ran qmake from Qt Creator and got this:

10:59:49: Starting: "/usr/lib/arm-linux-gnueabihf/qt5/bin/qmake" /project.pro -spec linux-g++ CONFIG+=debug CONFIG+=qml_debug
sh: 1: /usr/lib/arm-linux-gnueabihf/qt5/bin/rcc: not found
10:59:50: The process "/usr/lib/arm-linux-gnueabihf/qt5/bin/qmake" exited normally.

rcc is provided by qtbase5-dev-tools, which cannot be coinstalled alongside other architectures' version of itself:

# apt install qtbase5-dev-tools:armhf
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be REMOVED:
  qtbase5-dev-tools
The following NEW packages will be installed:
  qtbase5-dev-tools:armhf
0 upgraded, 1 newly installed, 1 to remove and 0 not upgraded.
Need to get 651 kB of archives.
After this operation, 985 kB disk space will be freed.
Do you want to continue? [Y/n] q

I'll try with a very dirty hack:

# cd /usr/lib/arm-linux-gnueabihf/qt5/bin
# ln -s `which rcc` .
# ln -s `which uic` .
# ln -s `which moc` .
# ls -la
total 1944
drwxr-xr-x 2 root root    4096 Nov 29 11:04 .
drwxr-xr-x 7 root root    4096 Nov 28 17:05 ..
lrwxrwxrwx 1 root root      12 Nov 29 11:04 moc -> /usr/bin/moc
-rwxr-xr-x 1 root root 1982208 Jan 11  2017 qmake
lrwxrwxrwx 1 root root      12 Nov 29 11:04 rcc -> /usr/bin/rcc
lrwxrwxrwx 1 root root      12 Nov 29 11:04 uic -> /usr/bin/uic

This is ugly:

  • It places files in /usr outside /usr/local that are not maintained by the package manager
  • It places amd64 executables in /usr/lib/arm-linux-gnueabihf which should contain armhf code

Let's see what happens in Qt Creator:

11:14:35: Starting: "/usr/lib/arm-linux-gnueabihf/qt5/bin/qmake" /project.pro -spec linux-g++ CONFIG+=debug CONFIG+=qml_debug
11:14:35: The process "/usr/lib/arm-linux-gnueabihf/qt5/bin/qmake" exited normally.
11:14:35: Starting: "/usr/bin/make" qmake_all
make: Nothing to be done for 'qmake_all'.
11:14:36: The process "/usr/bin/make" exited normally.

and build:

11:15:29: Starting: "/usr/bin/make"
g++ -c -pipe -std=c++11 -g -Wall -W -D_REENTRANT -fPIC -DQT_DEPRECATED_WARNINGS -DQT_QML_DEBUG -DQT_SVG_LIB -DQT_WIDGETS_LIB -DQT_GUI_LIB -DQT_CORE_LIB -I../project -I. -isystem /usr/include/arm-linux-gnueabihf/qt5 -isystem /usr/include/arm-linux-gnueabihf/qt5/QtSvg -isystem /usr/include/arm-linux-gnueabihf/qt5/QtWidgets -isystem /usr/include/arm-linux-gnueabihf/qt5/QtGui -isystem /usr/include/arm-linux-gnueabihf/qt5/QtCore -I. -I/usr/lib/arm-linux-gnueabihf/qt5/mkspecs/linux-g++ -o main.o ../project/main.cpp

Unfortunately, it is using g++ even though I configured the kit to use /usr/bin/arm-linux-gnueabihf-* instead.

Trying an explicit override in the .pro:

QMAKE_CXX = /usr/bin/arm-linux-gnueabihf-g++
QMAKE_LINK = /usr/bin/arm-linux-gnueabihf-g++

And the project builds and runs fine on the Raspberry Pi, using a kit and two simple overrides in the .pro, all build dependencies as packaged by Debian, and a toolchain that entirely runs on native code.

Entirely?

$ file /usr/lib/arm-linux-gnueabihf/qt5/bin/qmake
/usr/lib/arm-linux-gnueabihf/qt5/bin/qmake: ELF 32-bit LSB shared object, ARM, EABI5 version 1 (GNU/Linux), dynamically linked, interpreter /lib/ld-linux-armhf.so.3, for GNU/Linux 3.2.0, BuildID[sha1]=1f5b063926570702f5568b1e5cec6c70d214fc73, stripped
$ dpkg-architecture -q DEB_HOST_ARCH
amd64
$ /usr/lib/arm-linux-gnueabihf/qt5/bin/qmake -v
QMake version 3.0
Using Qt version 5.7.1 in /usr/lib/arm-linux-gnueabihf

Indeed qmake is armhf code that runs happily on my amd64 laptop because I accidentally have qemu-user-static installed.
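One quick way to confirm that transparent emulation is what makes this work is to look at the binfmt registration (assuming the binfmt-support tooling is installed):

update-binfmts --display qemu-arm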

This strategy produces results, although it depends on a dirty hack. To summarise it (a condensed command sketch follows the list):

  • Depend on crossbuild-essential-armhf, qemu-user-static, qemu-system-arm
  • Install :armhf versions of Qt and all build-dependencies
  • Symlink native moc, rcc, uic into the armhf bin directory (argh!)
  • Configure a kit as before
  • Override QMAKE_CXX and QMAKE_LINK in the .pro
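Condensed into commands, and using a couple of illustrative Qt dev packages, the recipe above looks roughly like this (the symlink step is exactly the dirty hack described earlier):

dpkg --add-architecture armhf
apt update
apt install crossbuild-essential-armhf qemu-user-static qemu-system-arm
apt install qtbase5-dev:armhf libqt5svg5-dev:armhf   # plus the project's other build-deps
cd /usr/lib/arm-linux-gnueabihf/qt5/bin
ln -s /usr/bin/moc /usr/bin/rcc /usr/bin/uic .       # the dirty hack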

Other things to investigate, besides removing the need for that dirty hack:

  • Find out if there is a way to package and install a kit for Qt Creator
  • Find out why qmake is ignoring the compiler settings from the kit and needs overrides in the .pro. Ideally the .pro should be unmodified, so that it can be used for all builds

A native qmake

Would it be possible to build a native version of qmake tweaked to point to all the right bits out of the box?

$ apt source qtbase-opensource-src-5.7.1+dfsg
$ cd qtbase-opensource-src-5.7.1+dfsg
$ DEB_HOST_MULTIARCH=arm-linux-gnueabihf DEB_HOST_ARCH_BITS=32 debian/rules override_dh_auto_configure
MAKEFLAGS="-j1" ./configure \
            -confirm-license \
            -prefix "/usr" \
            -bindir "/usr/lib/arm-linux-gnueabihf/qt5/bin" \
            -libdir "/usr/lib/arm-linux-gnueabihf" \
            -docdir "/usr/share/qt5/doc" \
            -headerdir "/usr/include/arm-linux-gnueabihf/qt5" \
            -datadir "/usr/share/qt5" \
            -archdatadir "/usr/lib/arm-linux-gnueabihf/qt5" \
            -plugindir "/usr/lib/arm-linux-gnueabihf/qt5/plugins" \
            -importdir "/usr/lib/arm-linux-gnueabihf/qt5/imports" \
            -translationdir "/usr/share/qt5/translations" \
            -hostdatadir "/usr/lib/arm-linux-gnueabihf/qt5" \
            -sysconfdir "/etc/xdg" \
            -examplesdir "/usr/lib/arm-linux-gnueabihf/qt5/examples" \
            -opensource \
            -plugin-sql-mysql \
            -plugin-sql-odbc \
            -plugin-sql-psql \
            -plugin-sql-sqlite \
            -no-sql-sqlite2 \
            -plugin-sql-tds \
            -system-sqlite \
            -platform linux-g++ \
            -system-harfbuzz \
            -system-zlib \
            -system-libpng \
            -system-libjpeg \
            -system-doubleconversion \
            -openssl \
            -no-rpath \
            -verbose \
            -optimized-qmake \
            -dbus-linked \
            -no-strip \
            -no-separate-debug-info \
            -qpa xcb \
            -xcb \
            -glib \
            -icu \
            -accessibility \
            -compile-examples \
            -no-directfb \
            -gstreamer 1.0 \
            -plugin-sql-ibase -opengl desktop \
…
$ file bin/qmake
bin/qmake: ELF 64-bit LSB shared object, x86-64, version 1 (GNU/Linux), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=ba868730cf34c54d7ddf6df0ab4b6ce5c7d6f2a0, not stripped
$ bin/qmake -v
QMake version 3.0
Using Qt version 5.7.1 in /usr/lib/arm-linux-gnueabihf

./configure in Qt builds qmake, and there is no need to run make afterwards. Good.

DEB_HOST_ARCH_BITS=32 is a hack I added to avoid debian/rules using -platform linux-g++-64 instead of -platform linux-g++.

Let's try to use that in Qt Creator:

sudo cp bin/qmake /usr/local/bin/qmake-arm-linux-gnueabihf

Qt Creator autodetects the new qmake and offers it as one of the available versions of Qt. Nice.

The need for the symlink hack is still there:

11:46:51: Starting: "/usr/local/bin/qmake-arm-linux-gnueabihf" ../project.pro -spec linux-g++-64 CONFIG+=debug CONFIG+=qml_debug
sh: 1: /usr/lib/arm-linux-gnueabihf/qt5/bin/rcc: not found

So is the need for the QMAKE_CXX and QMAKE_LINK overrides in the .pro.

Still, this way I could remove qemu-user-static from my system and the project still builds on my laptop and runs on my Raspberry Pi.

The qemu dependency is not needed anymore; the rest of the problems still stand. Since I'm rebuilding qmake anyway, I wonder if there's a way to tell it to use the compilers and tools I want, removing the need for the dirty hack and the overrides in the .pro files.
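One avenue worth trying, untested here: qmake accepts variable assignments on its command line (and Qt Creator can pass extra qmake arguments per kit), so the overrides could potentially move out of the .pro, for example:

/usr/local/bin/qmake-arm-linux-gnueabihf project.pro -spec linux-g++ \
    QMAKE_CXX=arm-linux-gnueabihf-g++ QMAKE_LINK=arm-linux-gnueabihf-g++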

qtwebengine5-dev

How about the other project that depends on qtwebengine5-dev?

# apt install qtwebengine5-dev qtwebengine5-dev:armhf
Reading package lists... Done
Building dependency tree
Reading state information... Done
qtwebengine5-dev is already the newest version (5.7.1+dfsg-6.1).
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 qtwebengine5-dev:armhf : Depends: libqt5webengine5:armhf (= 5.7.1+dfsg-6.1) but it is not going to be installed
                          Depends: libqt5webenginecore5:armhf (= 5.7.1+dfsg-6.1) but it is not going to be installed
                          Depends: libqt5webenginewidgets5:armhf (= 5.7.1+dfsg-6.1) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

It turns out that something in the big chain of dependencies of qtwebengine5-dev makes the amd64 and armhf versions not coinstallable.

It seems that I have to abandon the idea of installing armhf build-dependencies in the main development system, and start considering leaving the host system untouched for the native builds, while using a chroot for the cross-compilers and the cross-build-dependencies.

In the next post, I'll see how that goes.

Credits

This has been done as part of my work with Truelite.

28 November, 2017 10:33AM

hackergotchi for Mario Lang

Mario Lang

Qt 5.11 will fix major issues with JAWS on Windows

I published a rant about problems with Qt accessibility on Windows a few months ago. That post got an unusual amount of attention, as it was shared on Hacker News and almost made it to the top for a few minutes.

Apparently, ranting about it after a year of being ignored was not the worst thing to do. I can now confirm that the current dev version of Qt works properly with JAWS for Windows and QTextEdit widgets. This is quite a substantial fix, as it will likely improve the accessibility of many Windows applications written in Qt.

So this bug is finally (after more than a year of waiting) fixed. Thanks to André de la Rocha for implementing UI Automation support, which is apparently what was missing to make JAWS happy.

28 November, 2017 10:22AM by Mario Lang

hackergotchi for Thomas Lange

Thomas Lange

Building your own FAI installation image

I just returned from the MiniDebConf in Cambridge, where I gave two talks: one about building disk images (also for cross architectures) using fai-diskimage with FAI, and the other about making d-i easier for beginners. The ideas for that talk were also the inspiration for creating the FAI.me web page: making FAI easier to use, even without installing it. Is this FAI as a service (FaaS)?

On the web page you can easily configure a customized installation image, which will then be created for you. Booting this image, you will get a fully unattended installation based on FAI techniques, with all software packages already included on the installation image. The announcement has more details. I'm very excited to get your feedback on this project.

FAI FAI.me

28 November, 2017 07:59AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

#11: (Much) Faster Package (Re-)Installation via Caching

Welcome to the eleventh post in the rarely rued R rants series, or R4 for short. Time clearly flies, as it has been three months since our last post on significantly reducing library size via stripping. I had been meaning to post on today's topic for quite some time, but somehow something (working on a paper, releasing a package, ...) got in the way.

Just a few days ago Colin (of Efficient R Programming fame) posted about speeding up package installation. His recommendation? Remember that we (usually) have multiple cores, and use several of them via options(Ncpus = XX). It is an excellent point, and it bears repeating.

But it turns out I have not one but two salient recommendations too. Today covers the first; we should hopefully get to the second pretty soon. Both have one thing in common: you will be fastest if you avoid doing the work in the first place.

What?

One truly outstanding tool for this, in the context of installing compiled packages, is ccache. It is actually a pretty old tool that has been around for well over a decade, and it comes from the folks that gave us Samba.

What does it do? Well, in a nutshell, it hashes a checksum of a source file once the preprocessor has operated on it, and stores the resulting object file. In the case of a rebuild with unchanged code you get the object file back pretty much immediately. The idea is very similar to memoisation (as implemented in R, for example, in the excellent little memoise package by Hadley, Jim, Kirill and Daniel). The idea is the same: if you have to do something even moderately expensive a few times, do it once and then recall the result the other times.

This happens (at least to me) more often than not in package development. Maybe you change just one of several source files. Maybe you just change the R code, the Rd documentation or a test file---yet you still need a full reinstallation. In all these cases, ccache can help tremendously, as illustrated below.

How?

Because essentially all our access to compilation happens through R, we need to set this in a file read by R. I use ~/.R/Makevars for this and have something like these lines on my machines:

VER=
CCACHE=ccache
CC=$(CCACHE) gcc$(VER)
CXX=$(CCACHE) g++$(VER)
CXX11=$(CCACHE) g++$(VER)
CXX14=$(CCACHE) g++$(VER)
FC=$(CCACHE) gfortran$(VER)
F77=$(CCACHE) gfortran$(VER)

That way, when R calls the compiler(s) it will prefix with ccache. And ccache will then speed up.

There is an additional issue due to how R is used. Often we install from a .tar.gz. These will be freshly unpacked, and hence have "new" timestamps. This would usually lead ccache to skip the file (for fear of "false positives"), so we have to override this. Similarly, the tarball is usually unpacked in a temporary directory with an ephemeral name, creating a unique path. That too needs to be overridden. So in my ~/.ccache/ccache.conf I have this:

max_size = 5.0G
# important for R CMD INSTALL *.tar.gz as tarballs are expanded freshly -> fresh ctime
sloppiness = include_file_ctime
# also important as the (temp.) directory name will differ
hash_dir = false
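To check that the cache is actually being used, ccache's statistics counters are handy. Something along these lines (the tarball name and version are just an example) should show the hit counters go up on the second install:

ccache -z                            # reset the statistics
R CMD INSTALL Rcpp_0.12.13.tar.gz    # first install: cache misses, objects get stored
R CMD INSTALL Rcpp_0.12.13.tar.gz    # reinstall: mostly cache hits
ccache -s                            # show hit/miss statistics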

Show Me

A quick illustration will round out the post. Some packages are meatier than others. More C++ with more templates usually means longer build times. Below is a quick chart comparing times for a few such packages (i.e. RQuantLib, dplyr, rstan), as well as igraph ("merely" a large C package), lme4 and Rcpp. The worst among these is still my own RQuantLib package, wrapping (still just parts of) the ginormous and Boost-heavy QuantLib library.

Pretty dramatic gains. Best of all, we can of course combine these with other methods such as Colin's use of multiple CPUs, or even a simple MAKE="make -j4" to have multiple compilation units built in parallel. So maybe we all get to spend less time on social media and other timewasters as we spend less time waiting for our builds. Or maybe that is too much to hope for...
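For instance, a one-off install combining both approaches (assuming the Makevars entries above are in place, and with an illustrative tarball name) might look like:

MAKE="make -j4" R CMD INSTALL RQuantLib_0.4.4.tar.gz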

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

28 November, 2017 02:19AM

November 27, 2017

hackergotchi for Sean Whitton

Sean Whitton

Debian Policy call for participation -- November 2017, pt. 2

Here are some more of the bugs against the Debian Policy Manual. In particular, there really are quite a few patches needing seconds from DDs. Please consider getting involved.

Consensus has been reached and help is needed to write a patch

#568313 Suggestion: forbid the use of dpkg-statoverride when uid and gid ar…

#578597 Recommend usage of dpkg-buildflags to initialize CFLAGS and al.

#582109 document triggers where appropriate

#587991 perl-policy: /etc/perl missing from Module Path

#592610 Clarify when Conflicts + Replaces et al are appropriate

#613046 please update example in 4.9.1 (debian/rules and DEB_BUILD_OPTIONS)

#614807 Please document autobuilder-imposed build-dependency alternative re…

#628515 recommending verbose build logs

#664257 document Architecture name definitions

#682347 mark ‘editor’ virtual package name as obsolete

Wording proposed, awaiting review from anyone and/or seconds by DDs

#582109 document triggers where appropriate

#610083 Remove requirement to document upstream source location in debian/c…

#636383 10.2 and others: private libraries may also be multi-arch-ified

#645696 [copyright-format] clearer definitions and more consistent License:…

#649530 [copyright-format] clearer definitions and more consistent License:…

#662998 stripping static libraries

#682347 mark ‘editor’ virtual package name as obsolete

#683495 perl scripts: ”#!/usr/bin/perl” MUST or SHOULD?

#688251 Built-Using description too aggressive

#737796 copyright-format: support Files: paragraph with both abbreviated na…

Merged for the next release

#683495 perl scripts: ”#!/usr/bin/perl” MUST or SHOULD?

#877674 [debian-policy] update links to the pdf and other formats of the do…

#878523 [PATCH] Spelling fixes

27 November, 2017 04:40PM

Lior Kaplan

AGPL enforced: The Israeli ICT authority releases code

Data.gov.il was created in 2011, after the Israeli social justice protests, as part of the public participation initiative, and started to offer data held by the government. Back then the website was based on Drupal. In 2016 it was changed to CKAN, a system designed for releasing data. This system is licensed under the AGPLv3, which requires source code availability for anyone who can access the system over a network, de facto for every user.

Since the change to CKAN, open source people have asked the state to release the code according to the license, but didn't get a clear answer, even though it has clearly been a violation all this time. This led Gai Zomer to file a formal complaint in March 2017 with the Israeli State Comptroller. Absurdly, that same month the ICT authority mentioned a policy to release source code it owns, while failing to release code it had taken from others and adapted.

With the end of the summer break and the Jewish holidays, and after I wasn't able to get the source, I decided to switch to legal channels. With the help of Jonathan Klinger and my company, Kaplan Open Source Consulting, we notified them that they should provide the source code or we would take the matter to court.

Well, it worked. Within 3 days the CKAN extensions were available on the website, but in a problematic way, so users weren't able to download them easily. This is why we decided not to publicize this code release and to let them fix it first. In addition, we made it clear that all the source code should be available, not only the extensions. Furthermore, if they are already releasing, it is recommended to use a git repository instead of just "dumping" a tarball. So we told them that if they aren't going to create a git repository we'll do it ourselves, but that in any case we would prefer them to do it.

While this issue is still pending, the ICT authority had a conference called “the citizen 360” about e-gov and open government in which they reaffirmed their open source plans.

A slide about open source from the Israeli ICT authority presentation

Now, a month later, after our second letter to them, the about page on data.gov.il was updated with links to the ICT authority's GitHub account, which has the sources for the website and the extensions. A big improvement, and an important milestone, as the commit to the repository was made from an official (gov.il) email address.

Beyond congratulating the Israeli ICT authority for their steps forward, and our satisfaction that our insistence bore fruit, we would like to see the repository updated on a regular basis and the code contributed back to the various CKAN extensions (e.g. the Hebrew translation). In general, we hope they will be inspired by how data.gov.uk handles technical transparency. If we allow ourselves to dream, we would like to see Israel become a dominant member of the CKAN community and among the other governments who use it.

We're happy to be a catalyst for open source in the Israeli government, and we promise to keep insisting where needed. We know that, due to other requests and notifications, more organizations are on their way to releasing code.

(This post is a translation from Hebrew of a post in Kaplan Open Source Consulting at https://kaplanopensource.co.il/2017/11/20/data-gov-il-code-release/)


Filed under: Debian GNU/Linux, Fedora, Government Policy, Israeli Community, LibreOffice, PHP, Proud to use free software

27 November, 2017 08:06AM by Kaplan

November 26, 2017

hackergotchi for Clint Adams

Clint Adams

Fundie Ann is still livid about the ark

My high school biology teacher is still alive. His wife is in poor health so they sold the Holsteins, but the rest of the farm is still the same.

You may be wondering what dairy cows have to do with Switzerland.

Posted on 2017-11-26
Tags: etiamdisco

26 November, 2017 08:35PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

rfoaas 1.1.1: Updated and extended

rfoaas greed example

FOAAS upstream is still at release 1.1.0, but added a few new accessors a couple of months ago. So this new version of rfoaas updates to these: asshole(), cup(), fyyff(), immensity(), programmer(), rtfm(), thinking(). We also added test coverage and in doing so noticed that our actual tests never ran on Travis. Yay. Now fixed.

As usual, CRANberries provides a diff to the previous CRAN release. Questions, comments etc. should go to the GitHub issue tracker. More background information is on the project page as well as on the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

26 November, 2017 07:08PM

Sven Hoexter

half-assed Oracle JDK 9 support for java-package

Sitting at home in a not-so-decent state made me finally fiddle with java-package to deal with Oracle Java 9 builds. For now I've added only some half-assed support for JDK 9 amd64 builds, i.e. what you download as "jdk-9.0.1_linux-x64_bin.tar.gz" from the Oracle Java pages. It's a works-for-me thing, but maybe someone finds it useful; the source is here.

git clone https://git.sven.stormbind.net/java-package.git
cd java-package
sed -i -e 's#lib_dir="/usr/share/java-package"#lib_dir="./lib"#' make-jpkg

and you can just start using it in this directory without creating and installing the java-package Debian package.
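From there, the usual make-jpkg workflow applies; roughly (the name of the resulting .deb is a guess and may differ):

./make-jpkg jdk-9.0.1_linux-x64_bin.tar.gz
sudo dpkg -i oracle-java9-jdk_9.0.1_amd64.deb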

Side note: If you try out Java within a chroot, mount /proc into it. I wasted half an hour this morning finding that out.
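For reference, assuming a chroot under /srv/chroot/sid (the path is purely illustrative), that is just:

sudo mount -t proc proc /srv/chroot/sid/proc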

26 November, 2017 12:45PM

hackergotchi for Urvika Gola

Urvika Gola

Hack4Climate – Saving Climate while Sailing on the Rhine

Last week in Germany, a few miles away from the COP23 conference where political leaders and activists met to discuss climate, a bunch (100 to be exact) of developers and environmentalists participated in Hack4Climate to work on the same global problem: climate change.

COP23, the Conference of the Parties, happens yearly to discuss and plan action on combating climate change, especially the Paris Agreement. This year it took place in Bonn, Germany, which is home to a United Nations Campus. Despite the ongoing efforts by governments, it is the need of the hour that every single person living on Earth contributes at a personal level to fighting this problem. After all, we all, myself included, have somehow contributed to climate change, knowingly or unknowingly. That's where the role of technology comes in: to create solutions by providing a pool of resources and correct facts, so that everyone can start taking healthy steps.

I will try to put into words the thrilling experience Pranav Jain and I had participating as 2 of the 100 participants selected from all over the world for Hack4Climate. Pranav was also working closely with Rockstar Recruiting and the Hack4Climate team to spread awareness and bring in more participants before the actual event. It was a 4-day hackathon which took place on a *cruise ship* in front of the United Nations Campus. Before the hackathon began, we had informative sessions from delegates of various institutions and organisations such as the UNFCCC (United Nations Framework Convention on Climate Change), the MIT Media Lab, IOTA and Ethereum. These sessions helped us all get more insight into the climate problem from a technical and environmental angle. We focussed on using distributed ledger technology (blockchain) and open source, which can potentially help combat climate change.

Venue of Hack4Climate – the Scenic Crystal cruise ship stopping by the UN Campus in Bonn, Germany (Source)

 

The 20 teams worked on creating solutions that could fit into areas like identifying and tracking emissions, carbon pricing, distributed energy, sustainable land use, and sustainable transport.

Pranav Jain and I worked on green, low-carbon diamonds through our solution, Chain4Change. We used blockchain to track the carbon emissions in the mining of minerals, particularly diamonds. Our project helps track the mining, cutting and polishing process for every unique diamond that is available for purchase. It could also certify a carbon offset for each process and help the diamond company improve efficiency and save money. Our objective was to track carbon emissions throughout the supply chain, considering the kind of machinery, transport and power being used. The technologies used in our solution are Solidity, Android, Python and Web3JS. We integrated all of them on a single platform.

We wanted to raise awareness among ordinary customers by putting the numbers (the carbon footprint) before them, so that they know how much energy and fossil fuel were consumed for a particular mineral. This would help them make a smart, climate-friendly and greener decision when they purchase. After all, our climate is more precious than diamonds.

Each project track had support from a particular company, which gave more insight and support on data and the business model. Our project track was sponsored by Everledger, a company which believes that transparency is the key to ensuring ethical trade.

Project flow (Source: Everledger)

Everledger's CEO, Leanne, talked about women in technology and swiftly made us realize how we need equal representation of all genders to tackle this global problem. I talked about Outreachy with other female participants, and amidst such a diverse set of participants I felt really connected with a few people I met who were open source contributors. The open source community has always been very warm and fun to interact with. We exchanged which conferences we had attended, like FOSDEM and DebConf, and which projects we had worked on. The current Outreachy round 15 is ongoing; however, applications for round 16 of the Outreachy internships will open in February 2018 for the May to August 2018 internship round. You can check this link here for more information on projects under Debian and Outreachy. Good luck!

Lastly and most importantly, thank you Nick Beglinger (CleanTech21 CEO) and his team, who put on this extraordinary event despite the initial challenges and made us all believe that yes, we can combat climate change by moving further, faster, together.

Thank you Debian, for always supporting us:)

A few pictures…

Pranav Jain pitching the final product
Scenic Crystal, the Rhine river and the Hack4Climate tee

Chain4Change Team Members – Pranav Jain, Toshant Sharma, Urvika Gola

Thanks for reading!


26 November, 2017 12:39PM by urvikagola

Renata D'Avila

Choosing a system for the blog

This blog was created because I am supposed to report my journey through the Outreachy internship.

Let me start by saying that I'm biased towards systems that use flat files for blogs instead of ones that require a database. It is so much easier to make the posts available through other means (such as having them backed up in a Git repository) that ensure their content will live on even if the site is taken down or dies. It is also much better to download the content this way than to pull down a huge database file, which may cost a significant amount of money given the amount of data to transfer. Having your content in flat files, in a format shared among many systems (such as Markdown), might also ensure a smooth transition to a new system, should a change become necessary at some point.

I have experimented with some options while working on projects. I played with Lektor while contributing to PyBeeWare. I liked Lektor, but I found its documentation severely lacking. I worked with Grav while we were working towards getting tem.blog.br back online. Grav is a good CMS and definitely an alternative to Wordpress, but, well, it needs a server to host it.

At first, I thought about using Jekyll. It is a good site generator and it even has a Code Academy course on how to create a website and deploy it to GitHub Pages, which I took a while ago. I could have chosen it to develop this blog, but it is written in Ruby. Which is fine, of course. The first steps I took into learning how to program were in Ruby, using Chris Pine's Learn to Program (also available in a pt-br version). So, what is my objection to Ruby? It so happens that I expect most of the development for the Outreachy project to be done using Python (and maybe some JavaScript), and I thought that adding a third language might make my life a bit harder.

That is how I ended up with Pelican. I had played a bit with it while contributing to the PyLadies Brazil website. During Python Sul, the regional Python conference we had last September, we also had a sprint to make the PyLadies Caxias do Sul website using Pelican and hosting it with Github Pages. It went smoothly. Look how awesome it turned out:

Image of the PyLadies Caxias do Sul website, white background and purple text

So, how do you make one of those? Hang on tight; I will explain it in detail in my next post! ;)

26 November, 2017 12:00PM by Renata