On the meaning of "we"
Rather than as a word of endearment, I'm starting to see "we" as a word of entitlement.
In some moments of insecurity, I catch myself "wee"-ing over other people, to claim them as mine.
After months of trying, I've finally got my hands on a Nintendo NES Classic Mini. It's everything I wish RetroPie was: simple, reliable, plug-and-play gaming. I didn't have an NES at the time, so the games are mostly new to me (although I'm familiar with things like Super Mario Brothers).
The two main complaints about the NES classic are the very short controller cable and the need to press the "reset" button on the main unit to dip in and out of games. Both are addressed by the excellent 8bitdo Retro Receiver for NES Classic bundle. You get a bluetooth dongle that plugs into the classic and a separate wireless controller. The controller is a replica of the original NES controller. However, they've added another two buttons on the right-hand side alongside the original "A" and "B", and two discrete shoulder buttons which serve as turbo-repeat versions of "A" and "B". The extra red buttons make it look less authentic which is a bit of a shame, and are not immediately useful on the NES classic (but more on that in a minute).
With the 8bitdo controller, you can remotely activate the Reset button by pressing "Down" and "Select" at the same time. Therefore the whole thing can be played from the comfort of my sofa.
That's basically enough for me, for now, but if I want to expand the functionality of the classic in the future, it's possible to mod it. A hack called "Hakchi2" lets you install additional NES ROMs; install RetroArch-based emulator cores and thus play SNES, Megadrive, N64 (etc. etc.) games; and apply other hacks like adding "down+select" Reset support to the wired controller. If you were playing non-NES games on the classic, the extra buttons on the 8bitdo would become useful.
Here's what happened in the Reproducible Builds effort between Sunday February 26 and Saturday March 4 2017:
Ed Maste will present Reproducible Builds in FreeBSD at AsiaBSDCon 2017.
Ximin Luo will present Reproducible builds, its uses and the future at Open Source Days in Copenhagen on March 18.
Holger Levsen will give a talk at the German Unix User Group's "Frühjahrsfachgespräch" in Darmstadt, Germany, about Reproducible Builds everywhere on March 23.
Verifying Software Freedom with Reproducible Builds will be presented by Vagrant Cascadian at Libreplanet2017 in Boston, March 25th-26th.
Aspiration Tech published a very detailed report on our Reproducible Builds World Summit 2016 in Berlin.
Duncan published a very thorough post on the Rust Programming Language Forum about reproducible builds in the Rust compiler and toolchain.
In particular, he produced a table recording the reproducibility of different build products under different individual variations, totalling 187 build+variation combinations.
Chris Lamb:
Dhole:
60 package reviews have been added, 8 have been updated and 13 have been removed this week, adding to our knowledge about identified issues.
1 issue type has been added:
During our reproducibility testing, FTBFS bugs have been detected and reported by:
diffoscope 78 was uploaded to unstable and jessie-backports by Mattia Rizzolo. It included contributions from:
- xxd work on jessie again. (Closes: #855239)
- normalize_zeros to more generic utils.data module.

In addition, the following changes were made on the experimental branch:
This week's edition was written by Ximin Luo, Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.
As I announced on the mailing lists a few days ago, the Debian SunCamp (DSC2017) is happening again this May.
SunCamp is different from most other Debian events. Instead of a busy schedule of talks, SunCamp focuses on the hacking and socialising aspects, without making it just a Debian party/vacation.
The idea is to have 4 very productive days, staying in a relaxing and comfy environment, working on your own projects, meeting with your team, or presenting to fellow Debianites your most recent pet project.
We have tried to make this event as simple as possible, both for organisers and attendees. There will be no schedule, except for the meal times at the hotel. But even these can be ignored: there is a lovely bar that serves snacks all day long, and plenty of restaurants and cafés around the village.
The SunCamp is an event to get work done, but there will be time for relaxing and socialising too.
Do you fancy a hack-camp in a place like this?
One of the things that makes the event simple is that we have negotiated a flat price for accommodation that includes use of all the facilities in the hotel, and optionally food. We will give you a booking code, and then you arrange your accommodation as you please; you can even stay longer if you feel like it!
The rooms are simple but pretty, and everything has been renovated very recently.
We are not preparing a talks programme, but we will provide the space and resources for talks if you feel inclined to prepare one.
You will have a huge meeting room, divided in 4 areas to reduce noise, where you can hack, have team discussions, or present talks.
Do you want to see more pictures? Check the full gallery
Debian SunCamp 2017
Hotel Anabel, Lloret de Mar, Province of Girona, Catalonia, Spain
May 18-21, 2017
Tempted already? Head to the wikipage and register now, it is only 2 months away!
Please try to reserve your room before the end of March. The hotel has reserved a number of rooms for us until that time. You can reserve a room after March, but we can't guarantee the hotel will still have free rooms.
To be honest, at this stage I'd actually prefer ads in Wikipedia to having ever more intrusive begging for donations. Please go away soon.
Over the years, administrating thousands of NFS-mounting Linux computers at a time, I often needed a way to detect if a machine was experiencing an NFS hang. If you try to use df or look at a file or directory affected by the hang, the process (and possibly the shell) will hang too. So you want to be able to detect this without risking that the detection process gets stuck too. It has not been obvious how to do this. When the hang has lasted a while, it is possible to find messages like these in dmesg:
nfs: server nfsserver not responding, still trying
nfs: server nfsserver OK
It is hard to know if the hang is still going on, and it is hard to be sure that looking in dmesg will work. If there are lots of other messages in dmesg, the lines might have rotated out of sight before they are noticed.
While reading through the NFS client implementation in the Linux kernel code, I came across some statistics that seem to give a way to detect hangs. The om_timeouts sunrpc value in the kernel will increase every time the above log entry is inserted into dmesg. And after digging a bit further, I discovered that this value shows up in /proc/self/mountstats on Linux.
The mountstats content seems to be shared between files using the same file system context, so it is enough to check one of the mountstats files to get the state of the mount points for the machine. I assume this will not show lazily umounted NFS points, nor NFS mount points in a different process context (i.e. with a different filesystem view), but that does not worry me.
The content for an NFS mount point looks similar to this:
[...]
device /dev/mapper/Debian-var mounted on /var with fstype ext3
device nfsserver:/mnt/nfsserver/home0 mounted on /mnt/nfsserver/home0 with fstype nfs statvers=1.1
opts: rw,vers=3,rsize=65536,wsize=65536,namlen=255,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60,soft,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=129.240.3.145,mountvers=3,mountport=4048,mountproto=udp,local_lock=all
age: 7863311
caps: caps=0x3fe7,wtmult=4096,dtsize=8192,bsize=0,namlen=255
sec: flavor=1,pseudoflavor=1
events: 61063112 732346265 1028140 35486205 16220064 8162542 761447191 71714012 37189 3891185 45561809 110486139 4850138 420353 15449177 296502 52736725 13523379 0 52182 9016896 1231 0 0 0 0 0
bytes: 166253035039 219519120027 0 0 40783504807 185466229638 11677877 45561809
RPC iostats version: 1.0 p/v: 100003/3 (nfs)
xprt: tcp 925 1 6810 0 0 111505412 111480497 109 2672418560317 0 248 53869103 22481820
per-op statistics
NULL: 0 0 0 0 0 0 0 0
GETATTR: 61063106 61063108 0 9621383060 6839064400 453650 77291321 78926132
SETATTR: 463469 463470 0 92005440 66739536 63787 603235 687943
LOOKUP: 17021657 17021657 0 3354097764 4013442928 57216 35125459 35566511
ACCESS: 14281703 14290009 5 2318400592 1713803640 1709282 4865144 7130140
READLINK: 125 125 0 20472 18620 0 1112 1118
READ: 4214236 4214237 0 715608524 41328653212 89884 22622768 22806693
WRITE: 8479010 8494376 22 187695798568 1356087148 178264904 51506907 231671771
CREATE: 171708 171708 0 38084748 46702272 873 1041833 1050398
MKDIR: 3680 3680 0 773980 993920 26 23990 24245
SYMLINK: 903 903 0 233428 245488 6 5865 5917
MKNOD: 80 80 0 20148 21760 0 299 304
REMOVE: 429921 429921 0 79796004 61908192 3313 2710416 2741636
RMDIR: 3367 3367 0 645112 484848 22 5782 6002
RENAME: 466201 466201 0 130026184 121212260 7075 5935207 5961288
LINK: 289155 289155 0 72775556 67083960 2199 2565060 2585579
READDIR: 2933237 2933237 0 516506204 13973833412 10385 3190199 3297917
READDIRPLUS: 1652839 1652839 0 298640972 6895997744 84735 14307895 14448937
FSSTAT: 6144 6144 0 1010516 1032192 51 9654 10022
FSINFO: 2 2 0 232 328 0 1 1
PATHCONF: 1 1 0 116 140 0 0 0
COMMIT: 0 0 0 0 0 0 0 0
device binfmt_misc mounted on /proc/sys/fs/binfmt_misc with fstype binfmt_misc
[...]
The key number to look at is the third number in the per-op list. It is the number of NFS timeouts experienced per file system operation; here there are 22 write timeouts and 5 access timeouts. If these numbers are increasing, I believe the machine is experiencing an NFS hang. Unfortunately the timeout value does not start to increase right away. The NFS operations need to time out first, and this can take a while. The exact timeout value depends on the setup. For example the defaults for TCP and UDP mount points are quite different, and the timeout value is affected by the soft, hard, timeo and retrans NFS mount options.
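To check this mechanically, the counters can be pulled out of /proc/self/mountstats. Here is a rough sketch of my own (not a standard tool): it prints the cumulative timeout count per operation for all NFS mounts, so running it twice with some delay and comparing the output shows whether the counts are growing.

$ awk '$0 ~ /^(no )?device / { nfs = ($0 ~ /fstype nfs/); perop = 0 }
       nfs && /per-op statistics/ { perop = 1; next }
       perop && $1 ~ /^[A-Z_]+:$/ && $4 > 0 { sub(/:$/, "", $1); print $1, $4 }
      ' /proc/self/mountstats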
The only way I have been able to get this timeout count on Debian and RedHat Enterprise Linux is to peek in /proc/. According to the Solaris 10 System Administration Guide: Network Services, the 'nfsstat -c' command can be used to get these timeout values there, but this does not work on Linux, as far as I can tell. I asked Debian about this, but have not seen any replies yet.
Is there a better way to figure out if a Linux NFS client is experiencing NFS hangs? Is there a way to detect which processes are affected? Is there a way to get the NFS mount going quickly once the network problem causing the NFS hang has been cleared? I would very much welcome some clues, as we regularly run into NFS hangs.

Great news! The Netfilter project has been selected by Google as a mentoring organization in this year's Google Summer of Code program. Following the pattern of recent years, Google seems to recognise and support the importance of this software project in the Linux ecosystem.
I will proudly be mentoring some students this year, along with Eric Leblond and of course Pablo Neira.
The focus of the Netfilter project has been on nftables for the last few years, and the students joining our community will likely work on the new framework.
For prospective students: there is an ideas document which you must read. The policy of the Netfilter project is to encourage students to send patches before they are accepted to join us. Therefore, a good starting point is to subscribe to the mailing lists, download the git code repositories, build the projects by hand (see the sketch below) and look at the bugzilla (registration required).
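As a sketch, fetching and building the main projects from source might look like this (repository paths assumed from git.netfilter.org, and each project is assumed to be a standard autotools build):

git clone git://git.netfilter.org/libmnl
git clone git://git.netfilter.org/libnftnl
git clone git://git.netfilter.org/nftables
# build and install the libraries first, then nftables itself
cd libmnl && ./autogen.sh && ./configure && make && sudo make install && cd ..
cd libnftnl && ./autogen.sh && ./configure && make && sudo make install && cd ..
cd nftables && ./autogen.sh && ./configure && make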
Thanks to these kinds of internships and programs, I believe it is interesting to note the growing involvement of women over the last few years. Off the top of my head: Ana Rey (@AnaRB), Shivani Bhardwaj (@tuxish), Laura García and Elise Lennion (blog).
On a side note, Debian is not participating in GSoC this year :-(
Since I use this as a base for other PHP packages like SimKolab, I’ve updated my packaging example with:
The old features (Apache 2.2 and 2.4 support, dbconfig-common, etc.) are, of course, still there. Support for other webservers could be contributed by you, and I could extend the autoloader to work at runtime (using dpkg triggers) to include dependencies as packaged in other Debian packages. See, nobody needs “composer”! ☻
Feel free to check it out, play around with it, install it, test it, send me improvement patches and feature requests, etc. — it’s here with a mirror at GitHub (since I wrote it myself and the licence is permissive enough anyway).
This posting and the code behind it are sponsored by my employer ⮡ tarent.
08 March, 2017 09:15PM by MirOS Developer tg (tg@mirbsd.org)
After quite a bit of work, we finally have the sponsorship brochure produced for GUADEC and GNOME.Asia. Huge thanks to everyone who helped, I’m really pleased with the result. Again, if you or your company are interested in sponsoring us, please drop a mail to sponsors@guadec.org!
I like food, and I like games. So this week there were a couple of awesome sneak previews of the upcoming GNOME 3.24 release. Matthias Clasen posted about the GNOME Recipes 1.0 release – tasty snacks are now available directly on the desktop, which means I can also view them when I’m at the back of the house in the kitchen, where the wifi connection is somewhat spotty. Adrien Plazas also posted about GNOME Games – now I can get my retro gaming fix easily.

I was sent a package in the post, with lots of blank stickers and a couple of pens. I’ve now signed a load of stickers, and my hand hurts. More details about exactly what this is about soon :)
08 March, 2017 09:02PM by Neil McGovern
Chris sat in the window seat in the row behind his parents. Actually he also sat in half of his neighbor’s seat. His neighbor was uncomfortable but said nothing and did not attempt to lower the armrest to try to contain his girth.
His parents were awful human beings: selfish, self-absorbed and controlling. “Chris,” his dad would say, “look out the window!” His dad was the type of officious busybody who would snitch on you at work for not snitching on someone else.
“What?” Chris would reply, after putting down The Handmaid’s Tale and removing one of his earbuds. Then his dad would insist that it was very important that he look out the window to see a very important cloud or glacial landform.
Chris would comply and then return to his book and music.
“Chris,” his mom would say, “you need to review our travel itinerary.” His mom cried herself to sleep when she heard that Nigel Stock died, gave up on ever finding True Love, and resolved to achieve a husband and child instead.
“What?” Chris would reply, after putting down The Handmaid’s Tale and removing one of his earbuds. Then his mom would insist that it was very important that he review photos and prose regarding their managed tour package in Costa Rica, because he wouldn’t want to show up there unprepared. Chris would passive-aggressively stare at each page of the packet, then hand it back to his mother.
It was already somewhat clear that due to delays in taking off they would be missing their connecting flight to Costa Rica. About ⅓ of the passengers on the aeroplane were also going to Costa Rica, and were discussing the probable missed connection amongst themselves and with the flight staff.
Chris’s parents were oblivious to all of this, despite being native speakers of English. Additionally, just as they were unaware of what other people were discussing, they imagined that no one else could hear their private family discussions.
Everyone on the plane missed their connecting flights. Chris’s parents continued to be terrible human beings.
So the new president of the United States of America claims to be surprised to discover that he was wiretapped during the election, before he was elected president. He even claims this must be illegal. Well, doh: if there is one thing the confirmations from Snowden documented, it is that the entire population of the USA is wiretapped, one way or another. Of course the presidential candidates were wiretapped, alongside the senators, judges and the rest of the people in the USA.
Next, the Federal Bureau of Investigation asked the Department of Justice to publicly reject the claims that Donald Trump was wiretapped illegally. I fail to see the relevance, given that I am sure the surveillance industry in the USA believes it has all the legal backing it needs to conduct mass surveillance on the entire world.
There is even the director of the FBI stating that he never saw an order requesting wiretapping of Donald Trump. That is not very surprising, given how the FISA court works, with all its activity being secret. Perhaps he only heard about it?
What I find saddest in this story is how Norwegian journalists present it. In a news report on the radio the other day from the Norwegian National Broadcasting Company (NRK), I heard the journalist claim that 'the FBI denies any wiretapping', while the reality is that 'the FBI denies any illegal wiretapping'. There is a fundamental and important difference, and it makes me sad that the journalists are unable to grasp it.
The following contributors got their Debian Developer accounts in the last two months:
The following contributors were added as Debian Maintainers in the last two months:
Congratulations!
07 March, 2017 11:30PM by Jean-Pierre Giraud
This is a mini workshop serving as an introduction to using Ansible for the administration of Debian systems.
As an example, it shows how this configuration management tool can be used to remotely set up a simple WSGI application running on an Apache web server on a Debian installation, making it available on the net.
The application which is used as an example is httpbin by Runscope.
This is a useful HTTP request service for the development of web software or any other purposes, featuring a number of specific endpoints that can be used for different testing matters.
For example, the address http://<address>/user-agent of httpbin returns the user agent identification of the client program which has been used to query it (taken from the request header).
There are official instances of this request server running on the net, like the one at http://httpbin.org/.
WSGI is a widespread standard for programming web applications in Python, and httpbin is implemented in Python using the Flask web framework.
The basis of the workshop is a simple base installation of an up-to-date Debian 8 “Jessie” on a demonstration host, the latest official release being 8.7. As a first step, the installation has to be switched over to the “testing” branch of Debian, because the Debian packages of httpbin are comparatively new and will be introduced into the “stable” branch of the archive for the first time with the upcoming major release 9 “Stretch”. After that, the Apache packages which are needed to make the service available (apache2 and libapache2-mod-wsgi – other web servers could of course be used instead), and which are not part of a base installation, are installed from the archive. The web server then gets launched remotely, the httpbin package is pulled in, and the service is integrated into Apache. To achieve that, two configuration files must be deployed on the target system, and a few additional operations are needed to get everything working together. Every step is preconfigured in Ansible, so that the whole process can be launched by a single command on the control node and run on a single or a number of comparable target machines automatically and reproducibly.
If a server is needed for trying this workshop out, straightforward cloud server instances are available on the net, for example at DigitalOcean, but – let me underline this – there are other cloud providers which offer the same things, too! If it’s needed only for a limited time, for experiments or other purposes, low-priced “droplets” are available there, billed by the hour. After registering, the desired machine(s) can be set up easily over the web interface (choose “Debian 8.7” as OS), but there are also command line clients available like doctl (which is not yet available as a Debian package). For convenient use of a droplet, the user should first generate an SSH key pair on the local machine:
$ ssh-keygen -t rsa -b 4096 -C "john@doe.com" -f ~/.ssh/mykey
The public part of the key ~/.ssh/mykey.pub can then be uploaded into the user account before the droplet is created; it will then be integrated automatically.
There is a good introduction on the whole process available in the excellent tutorial series serversforhackers.com, here.
Ansible can then use the SSH key pair to log into a droplet without the need to type in the password every time.
On a cloud server like this carrying a Debian base system, the examples in this workshop can be tried out well.
Ansible works client-less and doesn’t need to be installed on the remote system, only on the control node; however, a Python 2.7 interpreter is needed on the remote side (the base system of DigitalOcean includes that).
So that Ansible can do anything on them, the remote servers which are going to be controlled must be added to /etc/ansible/hosts.
This is a configuration file in the INI format for DNS names and IP addresses.
For a flexible organisation of the server inventory it’s possible to group hosts here; IP ranges can be given, and optional variables can be used, among other useful things (the default file contains a couple of examples).
One or more servers (in Ansible they are called “hosts”) on which something particular is going to happen (like httpbin being installed) can be added like this (the group name is arbitrary):
[httpbin]
192.0.2.0
Whether Ansible can communicate with the hosts in the group and actually operate on them can be verified by simply pinging them like this:
$ ansible httpbin -m ping -u root --private-key=~/.ssh/mykey
192.0.2.0 | SUCCESS => {
"changed": false,
"ping": "pong"
}
The command succeeded, so it appears there isn’t any significant problem regarding this machine.
The return value changed: false indicates that there haven’t been any changes on that host as a result of executing this command.
Next to ping there are several other modules which can be used with the command line tool ansible in the same way; these modules are actually something like the core components of Ansible.
The module shell, for example, can be used to execute shell commands on the remote machine, like uname to get some system information returned from the server:
$ ansible httpbin -m shell -a "uname -a" -u root --private-key=~/.ssh/mykey
192.0.2.0 | SUCCESS | rc=0 >>
Linux debian-512mb-fra1-01 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19) x86_64 GNU/Linux
In the same way, the module apt can be used to remotely install packages (see the sketch below). But with that alone there’s no major advantage over other software products offering similar functionality, and using those modules on the command line is just the basics of Ansible usage.
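For instance, an ad-hoc installation of a single package could look like this (a sketch following the same pattern as the commands above):

$ ansible httpbin -m apt -a "name=apache2 state=present" -u root --private-key=~/.ssh/mykey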
Playbooks in Ansible are YAML scripts for the manipulation of the registered hosts in /etc/ansible/hosts.
Different tasks can be defined here for successive processing. For example, a simple playbook for changing the package source from “stable” to “testing” goes like this:
---
- hosts: httpbin
tasks:
- name: remove "jessie" package source
apt_repository: repo='deb http://mirrors.digitalocean.com/debian jessie main' state=absent
- name: add "testing" package source
apt_repository: repo='deb http://httpredir.debian.org/debian testing main contrib non-free' state=present
- name: upgrade packages
apt: update_cache=yes upgrade=dist
First, as with the CLI tool ansible above, the targeted host group httpbin is chosen.
The default user “root” and the SSH key can be fixed here too, to spare the need to give them on the command line, as in the sketch below.
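For example, the head of the playbook could be extended like this (a sketch; the key path matches the one generated earlier):

---
- hosts: httpbin
  remote_user: root
  vars:
    ansible_ssh_private_key_file: ~/.ssh/mykey
  # tasks: ... (as above)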
Then there are three tasks defined to get worked down consecutively:
With the module apt_repository, the preset package source “jessie” is removed from /etc/apt/sources.list.
Then, a new package source for the “testing” archive is added to /etc/apt/sources.list.d/ using the same module (by the way, mirrors.digitalocean.com also provides testing, and that might be faster).
After that, the apt module is used to upgrade the package inventory (it performs apt-get dist-upgrade), after an update of the package cache has taken place (apt-get update).
A playbook like this (the filename is arbitrary, but commonly carries the suffix .yml) can be run by the CLI tool ansible-playbook, like this:
$ ansible-playbook httpbin.yml -u root --private-key=~/.ssh/mykey
Ansible then works down the individual “plays” of the tasks on the remote server(s) top-down; thanks to a high speed net connection and SSD block device hardware, the change of the system to a Debian Testing base installation only takes around a minute to complete in the cloud.
While working, Ansible puts out status reports for the individual operations.
If certain changes to the base system have already taken place, as when a playbook is run one more time, the modules of course sense that and just return the information that the system hasn’t been changed, because it is already in the desired state.
Beyond the basic playbook shown here there are more advanced features like register and when, available to bind the execution of a play to the error-free result of a previous one.
The apt module can then be used in the playbook to install the three needed binary packages one after another:
- name: install apache2
apt: pkg=apache2 state=present
- name: install mod_wsgi
apt: pkg=libapache2-mod-wsgi state=present
- name: install httpbin
apt: pkg=python-httpbin state=present
The Debian packages are configured in a way that the Apache web server is running immediately after installation, and the Apache module mod_wsgi is automatically integrated.
If other behaviour is desired, there are Ansible modules available for operating on Apache which can reverse this.
By the way, after the package has been installed the httpbin server can be launched with python -m httpbin.core, but this runs only a mini web server which is not suitable for production use.
To get httpbin running on the Apache web server two configuration files are needed. They could be set up in the project directory on the control node and then uploaded onto the remote machine with another Ansible module. The file httpbin.wsgi (the name is again arbitrary) contains only a single line which is the starter for the WSGI application to run:
from httpbin import app as application
The module copy can be used to deploy that script on the host, while the target folder /var/www/httpbin must be set up before that by the module file.
In addition to that, a separate user account like “httpbin” (the name is also arbitrary but picked up in the other config file) is needed to run it, and the module user can set this up.
The demonstrational playbook continues, and the plays which perform these three operations go like this:
- name: mkdir /var/www/httpbin
file: path=/var/www/httpbin state=directory
- name: set up user "httpbin"
user: name=httpbin
- name: copy WSGI starter
copy: src=httpbin.wsgi dest=/var/www/httpbin/httpbin.wsgi owner=httpbin group=httpbin mode=0644
Another configuration script httpbin.conf is needed for Apache on the remote server to include the WSGI application httpbin running as a virtual host. It goes like this:
<VirtualHost *>
WSGIDaemonProcess httpbin user=httpbin group=httpbin threads=5
WSGIScriptAlias / /var/www/httpbin/httpbin.wsgi
<Directory /var/www/httpbin>
WSGIProcessGroup httpbin
WSGIApplicationGroup %{GLOBAL}
Order allow,deny
Allow from all
</Directory>
</VirtualHost>
This file needs to be copied into the folder /etc/apache2/sites-available on the host, which already exists when the apache2 package is installed.
The remaining operations needed to get everything running together are:
The default welcome screen of Apache blocks everything else and should be disabled with Apache’s CLI tool a2dissite.
After that, the new virtual host needs to be activated with the complementary tool a2ensite – both can be run remotely via the module command.
Then the Apache server on the remote machine must be restarted to read in the new configuration.
You’ve guessed it already, that’s all easy to perform with Ansible:
- name: deploy configuration script
copy: src=httpbin.conf dest=/etc/apache2/sites-available owner=root group=root mode=0644
- name: deactivate default welcome screen
command: a2dissite 000-default.conf
- name: activate httpbin virtual host
command: a2ensite httpbin.conf
- name: restart Apache
service: name=apache2 state=restarted
That’s it.
After this playbook has been performed completely by Ansible on one (or several) freshly set up remote Debian base installations, the httpbin request server is available, running on the Apache web server, and can be queried from anywhere by a web browser, or for example with curl:
$ curl http://192.0.2.0/user-agent
{
"user-agent": "curl/7.50.1"
}
With the broad set of Ansible modules and with playbooks, a lot of tasks can be accomplished, like the example problem explained here. But the range of functions of Ansible is even more comprehensive; discussing all of it would go beyond the scope of this blog post. For example, playbooks offer more advanced features like event handlers, which can be used for recurring operations like the restart of Apache in more extensive projects, as sketched below. Beyond playbooks, templates can be set up in roles which behave differently on selected machine groups – Ansible uses Jinja2 as its template engine for that. And the scope of functions of the basic modules can be expanded by employing external tools.
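As an illustration of the handler mechanism (a sketch, not part of the workshop playbook as presented): the Apache restart can be turned into a handler which only fires when a configuration task actually changed something.

tasks:
  - name: deploy configuration script
    copy: src=httpbin.conf dest=/etc/apache2/sites-available owner=root group=root mode=0644
    notify: restart apache
handlers:
  - name: restart apache
    service: name=apache2 state=restarted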
To drop a word on why it could be useful in certain situations to run your own instances of the httpbin request server instead of using the official ones which are provided on the net by Runscope:
Some people might prefer to run a private instance, for example in the local network, instead of querying the one on the internet.
Or, for development reasons, a couple or even a large number of identical instances might be needed; Ansible is ideal for setting them up automatically.
Also, the Javascript bindings to tracking services like Google Analytics in httpbin/templates/trackingscripts.html are patched out in the Debian package.
That could be another reason to prefer setting up your own instance on a Debian server.
07 March, 2017 02:26PM by Hideki Yamane (noreply@blogger.com)
RProtoBuf provides R bindings for the Google Protocol Buffers ("Protobuf") data encoding and serialization library used and released by Google, and deployed as a language and operating-system agnostic protocol by numerous projects.
The RProtoBuf 0.4.9 release is the fourth and final update this weekend following the request by CRAN to not use package= in .Call() when PACKAGE= is really called for.
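To illustrate the distinction (with a made-up entry point do_work; this is not actual RProtoBuf code):

## wrong: 'package' is matched into '...' and passed to the C routine as an extra argument
.Call("do_work", x, package = "RProtoBuf")
## right: PACKAGE= names the shared library in which to look up the symbol
.Call("do_work", x, PACKAGE = "RProtoBuf")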
Some of the code in RProtoBuf 0.4.9 had this bug; some other entry points had neither (!!). With the ongoing drive to establish proper registration of entry points, a few more issues were coming up, all of which are now addressed. And we had some other unreleased minor cleanup, so this made for a somewhat longer (compared to the other updates this weekend) NEWS list:
Changes in RProtoBuf version 0.4.9 (2017-03-06)
- A new file init.c was added with calls to R_registerRoutines() and R_useDynamicSymbols()
- Symbol registration is enabled in useDynLib
- Several missing PACKAGE= arguments were added to the corresponding .Call invocations
- Two (internal) C++ functions were renamed with suffix _cpp to disambiguate them from R functions with the same name
- All of the above were part of #26
- Some editing corrections were made to the introductory vignette (David Kretch in #25)
- The 'configure.ac' file was updated, and renamed from the older convention 'configure.in', along with 'src/Makevars'. (PR #24 fixing #23)
CRANberries also provides a diff to the previous release. The RProtoBuf page has an older package vignette, a 'quick' overview vignette, a unit test summary vignette, and the pre-print for the JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
The RVowpalWabbit package update is the third of four upgrades requested by CRAN, following RcppSMC 0.1.5 and RcppGSL 0.3.2.
This package being somewhat raw, the change was simple and just meant converting the single entry point to using Rcpp Attributes -- which addressed the original issue in passing.
No new code or features were added.
We should mention that there is parallel work ongoing in a higher-level package interfacing the vw binary -- rvw -- as well as a plan to redo this package via the external libraries. If that sounds interesting to you, please get in touch.
More information is on the RVowpalWabbit page. Issues and bugreports should go to the GitHub issue tracker.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
The RcppGSL package provides an interface from R to the GNU GSL using the Rcpp package.
RcppGSL release 0.3.2 is one of several maintenance releases this weekend to fix an issue flagged by CRAN: calls to .Call() sometimes used package= where PACKAGE= was meant. This came up now while the registration mechanism is being reworked.
So RcppGSL was updated too, and we took the opportunity to bring several packaging aspects up to the newest standards, including support for the soon-to-be required registration of routines.
No new code or features were added. The NEWS file entries follow below:
Changes in version 0.3.2 (2017-03-04)
- In the fastLm function, .Call now uses the correct PACKAGE= argument
- Added file init.c with calls to R_registerRoutines() and R_useDynamicSymbols(); also use .registration=TRUE in useDynLib in NAMESPACE
- The skeleton configuration for created packages was updated.
Courtesy of CRANberries, a summary of changes to the most recent release is available.
More information is on the RcppGSL page. Questions, comments etc should go to the issue tickets at the GitHub repo.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
RcppSMC provides Rcpp-based bindings to R for the Sequential Monte Carlo Template Classes (SMCTC) by Adam Johansen described in his JSS article.
RcppSMC release 0.1.5 is one of several maintenance releases this weekend to fix an issue flagged by CRAN: calls to .Call() sometimes used package= where PACKAGE= was meant. This came up now while the registration mechanism is being reworked.
Hence RcppSMC was updated, and we took the opportunity to bring several packaging aspects up to the newest standards, including support for the soon-to-be required registration of routines.
No new code or features were added. The NEWS file entries follow below:
Changes in RcppSMC version 0.1.5 (2017-03-03)
- Correct .Call to use PACKAGE= argument
- DESCRIPTION, NAMESPACE, README.md changes to comply with current R CMD check levels
- Added file init.c with calls to R_registerRoutines() and R_useDynamicSymbols()
- Updated .travis.yml file for continuous integration
Courtesy of CRANberries, there is a diffstat report for this release.
More information is on the RcppSMC page. Issues and bugreports should go to the GitHub issue tracker.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
For about six months I’ve been using a Raspberry Pi 3 as my desktop computer at home.
The overall experience is fine, but I had to do a few adjustments.
The first was to use KeePass, the second to compile gcc for cross-compilation (i.e. use buildroot).
I'm using KeePass + KeeFox to maintain my passwords on the various websites (and avoid reusing the same one everywhere).
For this to work on the Raspberry Pi, one needs to use mono from Xamarin:
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb http://download.mono-project.com/repo/debian wheezy main" | sudo tee /etc/apt/sources.list.d/mono-xamarin.list
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install mono-runtime
The install instruction comes from mono-project and the initial pointer was found on raspberrypi forums, stackoverflow and Benny Michielsen’s blog.
And for some plugins to work, I think I had to apt-get install mono-complete.
Using the Raspberry Pi 3, I revived an old project based on buildroot for the Raspberry Pi 2, and just building the tool-chain I ran into a few issues.
First, the compilation would stop during gmp compilation:
/usr/bin/gcc -std=gnu99 -c -DHAVE_CONFIG_H -I. -I.. -D__GMP_WITHIN_GMP -I.. -DOPERATION_divrem_1 -O2 -Wa,--noexecstack tmp-divrem_1.s -fPIC -DPIC -o .libs/divrem_1.o
tmp-divrem_1.s: Assembler messages:
tmp-divrem_1.s:129: Error: selected processor does not support ARM mode `mls r1,r4,r8,r11'
tmp-divrem_1.s:145: Error: selected processor does not support ARM mode `mls r1,r4,r8,r11'
tmp-divrem_1.s:158: Error: selected processor does not support ARM mode `mls r1,r4,r8,r11'
tmp-divrem_1.s:175: Error: selected processor does not support ARM mode `mls r1,r4,r3,r8'
tmp-divrem_1.s:209: Error: selected processor does not support ARM mode `mls r11,r4,r12,r3'
Makefile:768: recipe for target 'divrem_1.lo' failed
make[]: *** [divrem_1.lo] Error 1
I Googled the error and found this post on the Raspberry Pi forum, which was not really helpful…
But I finally found an explanation on Jan Hrach’s page on the subject.
The raspbian distribution is still optimized for the first Raspberry Pi, so basically the compiler is limited to the old Raspberry Pi instructions, while I was compiling gcc for a Raspberry Pi 2 and so needed the extra ones.
The proposed solution is to basically update raspbian to debian proper.
While this is a neat idea, I still wanted to get some raspbian specific packages (like the kernel) but wanted to be sure that everything else comes from debian. So I did some apt pinning.
First, I found that pinning alone is not sufficient, so when updating sources.list with plain Debian Jessie, make sure to add these lines before the raspbian lines:
# add official debian jessie (real armhf gcc)
deb http://ftp.fr.debian.org/debian/ jessie main contrib non-free
deb-src http://ftp.fr.debian.org/debian/ jessie main
deb http://security.debian.org/ jessie/updates main
deb-src http://security.debian.org/ jessie/updates main
deb http://ftp.fr.debian.org/debian/ jessie-updates main
deb-src http://ftp.fr.debian.org/debian/ jessie-updates main
Then run the following to get the debian gpg keys, but don’t yet upgrade your system:
apt update
apt install debian-archive-keyring
Now, let’s add the pinning.
First, if you were using APT::Default-Release "stable"; in your apt.conf (as I did), remove it. It does not mix well with the fine-grained pinning we will then implement.
Then, fill your /etc/apt/preferences file with the following:
# Debian
Package: *
Pin: release o=Debian,a=stable,n=jessie
Pin-Priority: 700

# Raspbian
Package: *
Pin: release o=Raspbian,a=stable,n=jessie
Pin-Priority: 600

Package: *
Pin: release o=Raspberry Pi Foundation,a=stable,n=jessie
Pin-Priority: 600

# Mono
Package: *
Pin: release v=7.0,o=Xamarin,a=stable,n=wheezy,l=Xamarin-Stable,c=main
Pin-Priority: 800
Note: You can use apt-cache policy (no parameter) to debug pinning.
The pinning above is mainly based on the origin field of the repositories (o=).
Finally you can upgrade your system:
apt update
apt-cache policy gcc
rm /var/cache/apt/archives/*
apt upgrade
apt-cache policy gcc
Note: Removing the cache ensures we download the packages from Debian, as raspbian uses the exact same naming but we know those packages are not compiled with a real armhf tool-chain.
The build stopped on recipe for target 's-attrtab' failed. There are many references to this on the web; that one was easy, it ‘just’ needs more memory, so I added some swap on the external SSD I was already using to work on buildroot.
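Adding the swap went roughly like this (a sketch; the mount point /mnt/ssd and the 2 GB size are assumptions, adjust to your setup):

sudo dd if=/dev/zero of=/mnt/ssd/swapfile bs=1M count=2048
sudo chmod 600 /mnt/ssd/swapfile
sudo mkswap /mnt/ssd/swapfile
sudo swapon /mnt/ssd/swapfile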
That’s it for today; not bad considering my last post was more than 3 years ago…
05 March, 2017 04:25PM by Julien
FTP assistant
This month you didn’t hear much from me, as I only marked 97 packages for accept and rejected 17 packages. I also only sent one email to maintainers asking questions.
Nevertheless the NEW queue is down to 46 packages at the moment, so my fellows in misery are doing a really good job :-).
Debian LTS
This was my thirty-second month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.
This month my overall workload was 13.00h. During that time I did uploads of
Thanks again to all the people who complied with my requests to test a package!
I also prepared the Jessie DSA for tnef which resulted in DSA 3798-1.
At the end of the month I did another week of frontdesk work and among other things I filed some bugs against packages from [1].
[1] https://security-tracker.debian.org/tracker/status/unreported
Other stuff
Reading about openoverlayrouter in the German magazine c’t, I uploaded that software to Debian.
I also uploaded npd6, which helped me to reach github from an IPv6-only machine.
Furthermore, I uploaded pyicloud.
As my DOPOM for this month I adopted bottlerocket. Though you can’t buy the hardware anymore, there still seem to be some users around.
05 March, 2017 04:16PM by alteholz
Org mode is a package for Emacs to “keep notes, maintain todo lists, planning projects and authoring documents”. It can execute embedded snippets of code and capture the output (through Babel). It’s an invaluable tool for documenting your infrastructure and your operations.
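A trivial example of such an embedded snippet (generic shell, nothing junos-specific): with point inside the block, C-c C-c executes it and inserts the output right below.

#+BEGIN_SRC sh :results output
uname -a
#+END_SRC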
Here are three (relatively) short videos exhibiting Org mode use in the context of network operations. In all of them, I am using my own junos-mode, which features the following perks:
Since some Junos devices can be quite slow, commits and remote executions are done asynchronously1 with the help of a Python helper.
In the first video, I take some notes about configuring BGP add-path feature (RFC 7911). It demonstrates all the available features of junos-mode.
In the second video, I execute a planned operation to enable this feature in production. The document is a modus operandi and contains the configuration to apply and the commands to check if it works as expected. At the end, the document becomes a detailed report of the operation.
In the third video, a cookbook has been prepared to execute some changes. I set some variables and execute the cookbook to apply the change and check the result.
05 March, 2017 11:01AM by Vincent Bernat

For people who are visually differently-abled, the above reads –
“To learn who rules over you, simply find out who you are not allowed to criticize” – Voltaire wrote this either in the late 16th century or the early 17th century, and those words were as apt in those times as they are in these turbulent times.
Update 05/03 – According to @bla, these words are actually attributable to a neo-nazi and apparently a child abuser. While I don’t know the context in which they were shared, the quote describes perfectly the environment in which we find ourselves. Please see his comment for a link and better understanding.
The below topic requires a bit of maturity, so if you are easily offended, feel free not to read further.
While this week-end I was supposed to share about the recent Science Day celebrations we had last week –

I would probably explore that next week.
This week the attempt is to share thoughts which have been simmering at the back of my mind for more than two weeks and whose answers are not clear to me.
My buttons were pressed when Martin f. Krafft shared about a CoC violation and the steps taken therein. While it is easy to say with 20:20 hindsight that the gentleman acted foolishly, I don’t really know the circumstances well enough to pass judgement so quickly. In reality, while I didn’t understand the ‘joke’ itself, I have to share some background, by way of anecdotes, as to why it isn’t so easy for me to give a judgement call.
a. I don’t know the topics chosen by stand-up comedians in other countries; in India, most of the stand-up acts are either about dating or sex or somewhere in-between, which is lovingly given the name ‘Leela’ (dance of life) in Indian mythology. I have been to several such acts over the years, at different events and different occasions, and 99.99% of the time I would see them dealing with pedophilia, necrophilia and all sorts of deviance in sexuality, with people laughing wildly. But a couple of times, when the comedian used the word ‘sex’, educated, probably more than a little world-travelled, middle to upper-middle class people were shocked into silence. I have seen this not once but 2-3 times in different environments, and was left wondering just a couple of years back: ‘Is sex such a bad word that people get easily shocked?’ Then how is it that we have 1.25 billion+ people in India? There have to be some people having sex. I don’t think all 1.25 billion people are test-tube babies.
b. This is actually what led to my quandary last year when sharing ‘My Experience with Debian’, which I had carefully prepared for newbies. Seeing seasoned Debian people, I knew my lame observations wouldn’t cut ice with them, and hence I had to share my actual story, which involved a bit of porn. I was in two minds whether or not to tell it, till my eyes caught a t-shirt which said ‘We make porn’ or something to that effect. That helped me share my point.
c. Which brings me to another point: it seems to be becoming increasingly difficult to talk about anything without apologizing to everyone beforehand, not really knowing who will take offence at what, and what the repercussions might be. In local sharings, I always start with a blanket apology: if I say something that offends you, please let me know afterwards so I can work on it. As the saying goes, ‘You can’t please everyone’, and that is what happens. Somebody sooner or later takes offence at something and re-interprets it in ways I had not thought of.
From the little sharings and interactions I have been part of, I find people take offence at the most innocuous things. For instance, one of the easy routes to not offending anyone is to use self-deprecating humour (or so I thought), whether about my race, caste, class or even my issues with weight, and yet each of the above would offend somebody. Charlie Chaplin didn’t have those problems. If somebody is from my caste, I’m portraying the caste in a certain light, a certain slant. If I’m talking about weight issues, then anybody who is like me (fat) feels that the world is laughing at them rather than at me, or fears they will be discriminated against. While I find the last point a bit valid, it leaves me with no tools and no humour. I have neither the observational powers nor the skills that Kapil Sharma has, and I have to be me.
While I have no clue what to do next, I feel the need to also share why humour is important in any sharing:
a. Break – When a speaker uses humour, the idea is to take a break from a serious topic. It helps to break the monotony of the talk, especially if the topic is full of jargon and new concepts. A small comedic relief brings the attendees’ attention back to the topic, as it tends to wander during a long monotonous talk.
b. Bridge – Some of the better speakers use one or more humorous anecdotes to explain and/or bridge the chasm between two different concepts. Some are able to produce humour on the fly, while others like me have to rely on tried and tested methods.
There is one other thing as well: humour seems to be a mixture of social, cultural and political context, and it’s very easy to have it backfire on you.
For instance, I attempted humour about refugees, probably not the best topic to try humour on in the current political climate, and predictably it didn’t go down well. I had to share and explain Robin Williams’ slightly dark yet humorous tale ‘Moscow on the Hudson‘. The film provides comedy and pathos in equal measure. You are left identifying with Vladimir Ivanoff (Robin Williams’ character), especially in the last scene where he learns of his grandmother dying; he remembers her and his motherland, Russia, and plays a piece on his saxophone as a tribute both to his grandmother and the motherland. Apparently, at the height of the cold war, if a Russian defected to the United States (‘land of Satan’ and other such terms were used) he couldn’t return to Russia.
The movie, seen some years back, left a deep impact on me. For all the shortcomings and ills that India has, even if I could leave, would I be happy anywhere else? The answers are not so easy. Most NRIs (Non-Resident Indians) who emigrated for good did it not so much for themselves but for their children, so the children would hopefully have a better upbringing, better facilities and better opportunities than they would have got here.
I have talked to more than a few NRIs, and while most of them give standardized answers, a while of talking and a couple of beers (or their favourite alcohol) later, you come across deeply conflicted human beings whose heart is in India while their job, profession and money interests compel them to stay in the country they are serving.
And Indian movies don’t make it easier for the Indian populace when trying to integrate into a new place. Some of the biggest hits of yesteryear were about keeping a distinct Indian culture in the new country, while the message of most countries is integration. I know of friends living in Germany who have to struggle through their German in order to be counted as citizens; the same, I guess, is true of other countries as well, not just the language but the customs too. They also probably struggle with learning more than one language and having an amalgamation of values which somehow they and their children have to make sense of.
I was mildly shocked last week to learn that Mishi Choudhary had to train people in the U.S. to differentiate between the Afghan style of wearing a turban and the Punjabi style. A simple search on ‘Afghani turban’ and ‘Punjabi turban’ reveals that there are a lot of differences between the two cultures. In fact, in the way they talk, the way they walk, there is a lot that differentiates the two cultures.
The second shock was a video of an African-American man racially abusing an Indian-American girl. At first I didn’t believe it, till I saw the video on Facebook.
My point through all this is that humour, that clean, simple exercise which brings a smile and uplifts the spirit, doesn’t seem to be as easy as it once was.
Comments, suggestions, criticisms all are welcome.
05 March, 2017 03:43AM by shirishag75
I use Replicant on my main Samsung S3 mobile phone. Replicant is a fully free Android distribution. One consequence of being “fully free” is that some functionality does not work properly, because the hardware requires non-free software. I am in the process of upgrading my main phone to the latest beta builds of Replicant 6. Getting GPS to work on Replicant/S3 is not that difficult, and I have made the decision that I am willing to compromise a bit on freedom for my Geocaching hobby. I have written before about how to get GPS to work on Replicant 4.0 and GPS on Replicant 4.2. When I upgraded to Wolfgang’s Replicant 6 build back in September 2016, it took some time to figure out how to get GPS to work. I prepared notes on non-free firmware on Replicant 6 which included a section on getting GPS to work. Unfortunately, that method requires that you build your own image and have access to the build tree, which is not for everyone. This writeup explains how to get GPS to work on Replicant 6 without building your own image. Wolfgang already explained how to add all other non-free firmware to Replicant 6, but that did not cover GPS. The reason is that GPS requires non-free software to run on your main CPU. You should understand the consequences of this before proceeding!

The first step is to download a Replicant 6.0 image; currently they are available from the Replicant 6.0 forum thread. Download the replicant-6.0-i9300.zip file and flash it to your phone as usual. Make sure everything (except GPS, of course) works after loading the other non-free firmware you may want (Wifi, Bluetooth etc.) using "./firmwares.sh i9300 all". You can install the Geocaching client c:geo via fdroid by adding fdroid.cgeo.org as a separate repository. Start the app and verify that GPS does not work. Keep the replicant-6.0-i9300.zip file around; you will need it later.
The tricky part about GPS is that the daemon is started through the init system of Android, specified by the file /init.target.rc. Replicant ships with the GPS part commented out. To modify this file, we need to bring out our little toolbox. Modifying the file on the device itself will not work: the root filesystem is extracted from a ramdisk file on every boot, so any changes made to the file will not be persistent. The file /init.target.rc is stored in the boot.img ramdisk, and that is the file we need to modify to make a persistent modification.
First we need the unpackbootimg and mkbootimg tools. If you are lucky, you might find them pre-built for your operating system. I am using Debian and I couldn’t find them easily. Building them from scratch is however not that difficult. Assuming you have a normal build environment (i.e., apt-get install build-essential), try the following to build the tools. I was inspired by a post on unpacking and editing boot.img for some of the following instructions.
git clone https://github.com/CyanogenMod/android_system_core.git
cd android_system_core/
git checkout cm-13.0
cd mkbootimg/
gcc -o ./mkbootimg -I ../include ../libmincrypt/*.c ./mkbootimg.c
gcc -o ./unpackbootimg -I ../include ../libmincrypt/*.c ./unpackbootimg.c
sudo cp mkbootimg unpackbootimg /usr/local/bin/
You are now ready to unpack the boot.img file. You will need the replicant ZIP file in your home directory. Also download the small patch I made for the init.target.rc file: https://gitlab.com/snippets/1639447. Save the patch as replicant-6-gps-fix.diff in your home directory.
mkdir t
cd t
unzip ~/replicant-6.0-i9300.zip
unpackbootimg -i ./boot.img
mkdir ./ramdisk
cd ./ramdisk/
gzip -dc ../boot.img-ramdisk.gz | cpio -imd
patch < ~/replicant-6-gps-fix.diff
Assuming the patch applied correctly (you should see output like "patching file init.target.rc" at the end) you will now need to put the ramdisk back together.
find . ! -name . | LC_ALL=C sort | cpio -o -H newc -R root:root | gzip > ../new-boot.img-ramdisk.gz
cd ..
mkbootimg --kernel ./boot.img-zImage \
  --ramdisk ./new-boot.img-ramdisk.gz \
  --second ./boot.img-second \
  --cmdline "$(cat ./boot.img-cmdline)" \
  --base "$(cat ./boot.img-base)" \
  --pagesize "$(cat ./boot.img-pagesize)" \
  --dt ./boot.img-dt \
  --ramdisk_offset "$(cat ./boot.img-ramdisk_offset)" \
  --second_offset "$(cat ./boot.img-second_offset)" \
  --tags_offset "$(cat ./boot.img-tags_offset)" \
  --output ./new-boot.img
Reboot your phone to the bootloader:
adb reboot bootloader
Then flash the new boot image back to your phone:
heimdall flash --BOOT new-boot.img
The phone will restart. To finalize things, you need the non-free GPS software components glgps, gps.exynos4.so and gps.cer. Previously I used a complicated method involving sdat2img.py to extract these files from a CyanogenMod 13.x archive. Fortunately, Lineage OS now offers downloads containing the relevant files too. You will need to download the archive, extract it, and load the files onto your phone.
wget https://mirrorbits.lineageos.org/full/i9300/20170125/lineage-14.1-20170125-experimental-i9300-signed.zip
mkdir lineage
cd lineage
unzip ../lineage-14.1-20170125-experimental-i9300-signed.zip
adb root
adb wait-for-device
adb remount
adb push system/bin/glgps /system/bin/
adb push system/lib/hw/gps.exynos4.vendor.so /system/lib/hw/gps.exynos4.so
adb push system/bin/gps.cer /system/bin/
Now reboot your phone, start c:geo, and it should find some satellites. Congratulations!
04 March, 2017 05:08PM by simon
(There is a French translation of this announcement.)
Join us in Paris, on May 13-14 2017, and we will find a way in which you can help Debian with your current set of skills! You might even learn one or two things in passing (but you don't have to).
Debian is a free operating system for your computer. An operating system is the set of basic programs and utilities that make your computer run. Debian comes with tens of thousands of packages, precompiled software bundled up for easy installation on your machine. A number of other operating systems, such as Ubuntu and Tails, are based on Debian.
The upcoming version of Debian, called Stretch, will be released later this year. We need you to help us make it awesome :)
Whether you're a computer user, a graphics designer, or a bug triager, there are many ways you can contribute to this effort. We also welcome experience in consensus decision-making, anti-harassment teams, and package maintenance. No effort is too small and whatever you bring to this community will be appreciated.
Here's what we will be doing:
Before diving into the exact content of this event, a few words from the organization team.
This is a work in progress, and a statement of intent. Not everything is organized and confirmed yet.
We want to bring together a heterogeneous group of people. This goal will guide our handling of sponsorship requests, and will help us make decisions if more people want to attend than we can welcome properly. In other words: if you're part of a group that is currently under-represented in computer communities, we would like you to be able to attend.
We are committed to providing a friendly, safe and welcoming environment for all, regardless of level of experience, gender, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, nationality, or other similar personal characteristic. Attending this event requires reading and respecting our Code of Conduct, which sets the standards of behaviour for the whole event, including public and private communication before, during, and after it. There will be a team ready to act if you feel you have been or are being harassed or made uncomfortable by an attendee.
We believe that money should not be a blocker for contributing to Debian. We will sponsor travel and find a place to sleep for as many enthusiastic attendees as possible. The space where this event will take place is wheelchair-accessible. We are trying to organize interpretation into sign language (probably French Sign Language). Vegan food should be provided for lunch. If you have any specific needs regarding food, please let us know when registering, and we will do our best.
There will be a number of stations, i.e. physical spaces dedicated to people with a given set of skills, each hosted by someone who is ready to make that space welcoming and productive.
A few stations are already organized, and are described below. If you want to host a station yourself, or would like us to organize another one, please let us know. For example, you may want to assess the state of Debian Stretch for a specific field of interest, be it audio editing, office work, network auditing or whatever you are doing with Debian :)
We will test Debian Stretch and gather feedback. We are particularly interested in:
Experienced Debian contributors will be ready to relay this feedback to the relevant teams so it is put to good use. Hypra collaborators will be there to bring a focus on universal access technologies.
Truth be told, Debian lacks people who are good at design and graphics. There is definitely a good amount of low-hanging fruit that can be tackled in a weekend, either in Debian per se, in upstream projects, or in Debian derivatives.
This station will be hosted by Juliette Belin. She designed the themes for the last two versions of Debian.
Bugs flagged as Release Critical are blocking the release of the upcoming version of Debian. To fix them, it helps to make sure the bug report documents the up-to-date status of the bug, and of its resolution. One does not need to be a programmer to do this work! For example, you can try and reproduce bugs in software you use... or in software you will discover. This helps package maintainers better focus their work.
This station will be hosted by Solveig. She has experience in bug triaging, in Debian and elsewhere. Cyril Brulebois, a long-time Debian developer, will provide support and advice upon request.
There will be a Bug Squashing Party.
See https://wiki.debian.org/BSP/2017/05/fr/Paris for the exact address and time.
Please register by the end of March if you want to attend this event!
If you have any questions, please get in touch with us.
Today, while looking up something in my productivity nemesis Wikipedia, I was waylaid by an article on Harshad numbers. (Link goes to Wolfram MathWorld out of spite.) This was interesting to me because Harshad is my patronym. Harshad numbers are also known as Niven numbers, but what kind of a crazy name is that? A Harshad number is a positive integer that is divisible by the sum of its digits.
1-10 are obviously Harshad numbers in base 10. 11 is not because 1 + 1 = 2 and 11 is not divisible by 2. 12 is because 1 + 2 = 3 and 12 is divisible by 3. And so on...you get the picture. Things get even more interesting when you deal with other bases.
Here is a perl one-liner to get the base 10 Harshad numbers from 1 to 100.
$ perl -E '$i=0+shift;for$n(1..$i){$t=0;map{ $t+=$_+0}(split q{},$n);print $n%$t==0?"$n ":q{};}say' 100
1 2 3 4 5 6 7 8 9 10 12 18 20 21 24 27 30 36 40 42 45 48 50 54 60 63 70 72 80 81 84 90 100
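For those who would rather not untangle the golfed Perl, the same check reads naturally in Python; this is my own illustrative sketch, not from the original post:

def is_harshad(n, base=10):
    # True if n is divisible by the sum of its digits in the given base
    digit_sum, m = 0, n
    while m:
        digit_sum += m % base
        m //= base
    return n % digit_sum == 0

# base 10 Harshad numbers from 1 to 100, matching the one-liner's output
print(" ".join(str(n) for n in range(1, 101) if is_harshad(n)))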
For almost a year now, we have been working on a Norwegian Bokmål edition of The Debian Administrator's Handbook. Now, thanks to the tireless effort of Ole-Erik, Ingrid and Andreas, the initial translation is complete, and we are working on the proofreading to ensure consistent language and the use of correct computer science terms. The plan is to make the book available on paper, as well as in electronic form. For that to happen, the proofreading must be completed and all the figures need to be translated. If you want to help out, get in touch.
A fresh PDF edition of the book in A4 format (the final book will have smaller pages), created every morning, is available for proofreading. If you find any errors, please visit Weblate and correct them. The state of the translation, including figures, is a useful reference for those providing Norwegian Bokmål screenshots and figures.
Weblate 2.12 has been released today, a few days behind schedule. It brings improved screenshot management, better search-and-replace features, and improved import. Many of the new features were already announced in a previous post, where you can find more details about them.
Full list of changes:
If you are upgrading from an older version, please follow our upgrading instructions.
You can find more information about Weblate on https://weblate.org, and the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server; you can log in there with the demo account using the demo password, or register your own user. Weblate is also being used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Aptoide, FreedomBox, Weblate itself and many other projects.
Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.
Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence it by expressing support for individual issues, either with comments or by providing a bounty for them.
Filed under: Debian English phpMyAdmin SUSE Weblate | 0 comments
The Internet saw Github's new TOS yesterday and collectively shrugged.
That's weird...
I don't have any lawyers, but the way Github's new TOS is written, I feel I'd need to consult with lawyers to understand how it might affect the license of my software if I hosted it on Github.
And the license of my software is important to me, because it is the legal framework within which my software lives or dies. If I didn't care about my software, I'd be able to shrug this off, but since I do it seems very important indeed, and not worth taking risks with.
If I were looking over the TOS with my lawyers, I'd ask these questions...
4 License Grant to Us
This seems to be saying that I'm granting an additional license to my software to Github. Is that right or does "license grant" have some other legal meaning?
If the Free Software license I've already licensed my software under allows for everything in this "License Grant to Us", would that be sufficient, or would my software still be licenced under two different licences?
There are violations of the GPL that can revoke someone's access to software under that license. Suppose that Github took such an action with my software, and their GPL license was revoked. Would they still have a license to my software under this "License Grant to Us" or not?
"Us" is actually defined earlier as "GitHub, Inc., as well as our affiliates, directors, subsidiaries, contractors, licensors, officers, agents, and employees". Does this mean that if someone say, does some brief contracting with Github, that they get my software under this license? Would they still have access to it under that license when the contract work was over? What does "affiliates" mean? Might it include other companies?
Is it even legal for a TOS to require a license grant? Don't license grants normally involve an intentional action on the licensor's part, like signing a contract or writing a license down? All I did was load a webpage in a browser and see, on the page, that by loading it they say I've accepted the TOS. (I then set about removing everything from Github.)
Github's old TOS was not structured as a license grant. What reasons might they have for structuring this TOS in such a way?
Am I asking too many questions only 4 words into this thing? Or not enough?
Your Content belongs to you, and you are responsible for Content you post even if it does not belong to you. However, we need the legal right to do things like host it, publish it, and share it. You grant us and our legal successors the right to store and display your Content and make incidental copies as necessary to render the Website and provide the Service.
If this is a software license, the wording seems rather vague compared with other software licenses I've read. How much wiggle room is built into that wording?
What are the chances that, if we had a dispute and this came before a judge, Github's lawyers would be able to find a creative reading of this that makes "do things like" include whatever they want?
Suppose that my software is javascript code or gets compiled to javascript code. Would this let Github serve up the javascript code for their users to run as part of the process of rendering their website?
That means you're giving us the right to do things like reproduce your content (so we can do things like copy it to our database and make backups); display it (so we can do things like show it to you and other users); modify it (so our server can do things like parse it into a search index); distribute it (so we can do things like share it with other users); and perform it (in case your content is something like music or video).
Suppose that Github modifies my software, does not distribute the modified version, but converts it to javascript code and distributes that to their users to run as part of the process of rendering their website. If my software is AGPL licensed, they would be in violation of that license, but doesn't this additional license allow them to modify and distribute my software in such a way?
This license does not grant GitHub the right to sell your Content or otherwise distribute it outside of our Service.
I see that "Service" is defined as "the applications, software, products, and services provided by GitHub". Does that mean at the time I accept the TOS, or at any point in the future?
If Github tomorrow starts providing say, an App Store service, that necessarily involves distribution of software to others, and they put my software in it, would that be allowed by this or not?
If that hypothetical Github App Store doesn't sell apps, but licenses access to them for money, would that be allowed under this license that they want granted to my software?
5 License Grant to Other Users
Any Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and "fork" your repositories (this means that others may make their own copies of your Content in repositories they control).
Let's say that company Foo does something with my software that violates its GPL license and the license is revoked. So they no longer are allowed to copy my software under the GPL, but it's there on Github. Does this "License Grant to Other Users" give them a different license under which they can still copy my software?
The word "fork" has a particular meaning on Github, which often includes modification of the software in a repository. Does this mean that other users could modify my software, even if its regular license didn't allow them to modify it or had been revoked?
How would this use of the platform-specific term "fork" be interpreted if this license were being analyzed in a courtroom?
If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to access your Content through the GitHub Service, and to use, display and perform your Content, and to reproduce your Content solely on GitHub as permitted through GitHub's functionality. You may grant further rights if you adopt a license.
This paragraph seems entirely innocuous. So, what does your keen lawyer mind see in it that I don't?
How sure are you about your answers to all this? We're fairly sure we know how well the GPL holds up in court; how well would your interpretation of all this hold up?
What questions have I forgotten to ask?
And finally, the last question I'd be asking my lawyers:
What's your bill come to? That much? Is using Github worth that much to me?
I woke this morning to Thorsten claiming the new GitHub Terms of Service could require the removal of Free software projects from it. This was followed by joeyh removing everything from github. I hadn’t actually been paying attention, so I went looking for some sort of summary of whether I should be worried and ended up reading the actual ToS instead. TL;DR version: No, I’m not worried and I don’t think you should be either.
First, a disclaimer. I’m not a lawyer. I have some legal training, but none of what I’m about to say is legal advice. If you’re really worried about the changes then you should engage the services of a professional.
The gist of the concerns around GitHub’s changes are that they potentially circumvent any license you have applied to your code, either converting GPL licensed software to BSD style (and thus permitting redistribution of binary forms without source) or making it illegal to host software under certain Free software licenses on GitHub due to being unable to meet the requirements of those licenses as a result of GitHub’s ToS.
My reading of the GitHub changes is that they are driven by a desire to ensure that GitHub are legally covered for the things they need to do with your code in order to run their service. There are sadly too many people who upload code there without a license, meaning that technically no one can do anything with it. Don’t do this people; make sure that any project you put on GitHub has some sort of license attached to it (don’t write your own - it’s highly likely one of Apache/BSD/GPL will suit your needs) so people know whether they can make use of it or not. “I don’t care” is not a valid reason not to do this.
Section D, relating to user generated content, is the one causing the problems. It’s possibly easiest to walk through each subsection in order.
D1 says GitHub don’t take any responsibility for your content; you make it, you’re responsible for it, they’re not accepting any blame for harm your content does nor for anything any member of the public might do with content you’ve put on GitHub. This seems uncontentious.
D2 reaffirms your ownership of any content you create, and requires you to only post 3rd party content to GitHub that you have appropriate rights to. So I can’t, for example, upload a copy of ‘Friday’ by Rebecca Black.
Thorsten has some problems with D3, where GitHub reserve the right to remove content that violates their terms or policies. He argues this could cause issues with licenses that require unmodified source code. This seems to be alarmist, and also applies to any random software mirror. The intent of such licenses is in general to ensure that the pristine source code is clearly separate from 3rd party modifications. Removal of content that infringes GitHub’s T&Cs is not going to cause an issue.
D4 is a license grant to GitHub, and I think forms part of joeyh’s problems with the changes. It affirms the content belongs to the user, but grants rights to GitHub to store and display the content, as well as make copies such as necessary to provide the GitHub service. They explicitly state that no right is granted to sell the content at all or to distribute the content outside of providing the GitHub service.
This term would seem to be the minimum necessary for GitHub to ensure they are allowed to provide code uploaded to them for download, and provide their web interface. If you’ve actually put a Free license on your code then this isn’t necessary, but from GitHub’s point of view I can understand wanting to make it explicit that they need these rights to be granted. I don’t believe it provides a method of subverting the licensing intent of Free software authors.
D5 provides more concern to Thorsten. It seems he believes that the ability to fork code on GitHub provides a mechanism to circumvent copyleft licenses. I don’t agree. The second paragraph of this subsection limits the license granted to the user to be the ability to reproduce the content on GitHub - it does not grant them additional rights to reproduce outside of GitHub. These rights, to my eye, enable the forking and viewing of content within GitHub but say nothing about my rights to check code out and ignore the author’s upstream license.
D6 clarifies that if you submit content to a GitHub repo that features a license you are licensing your contribution under these terms, assuming you have no other agreement in place. This looks to be something that benefits projects on GitHub receiving contributions from users there; it’s an explicit statement that such contributions are under the project license.
D7 confirms the retention of moral rights by the content owner, but states they are waived purely for the purposes of enabling GitHub to provide service, as stated under D4. In particular, this right is revocable, so in the event they do something you don't like, you can instantly remove all of their rights. Thorsten is more worried about the ability to remove attribution and thus breach CC-BY or some BSD licenses, but GitHub's whole model is providing attribution for changesets and tracking such changes over time, so it's hard to see exactly where the service falls down on ensuring the provenance of content is clear.
There are reasons to be wary of GitHub (they’ve taken a decentralised revision control system and made a business model around being a centralised implementation of it, and they store additional metadata such as PRs that aren’t as easily extracted), but I don’t see any indication that the most recent changes to their Terms of Service are something to worry about. The intent is clearly to provide GitHub with the legal basis they need to provide their service, rather than to provide a means for them to subvert the license intent of any Free software uploaded.
These are notes from my research that led to the publication of the password hashers article. This article is more technical than the previous ones and compares the various cryptographic primitives and algorithms used in the software I have reviewed. The criteria for inclusion in this list are fairly vague: I mostly included a password hasher if it was significantly different from previous implementations in some way, and I have also included all the major ones I could find.
Nic Wolff claims to be the first to have written such a program, all the way back in 2003. Back then the hashing algorithm was MD5, although Wolff has since updated it to use SHA-1, and he still maintains his webpage for public use. Another ancient but unrelated implementation is the Stanford University Applied Cryptography group's pwdhash software. That implementation was published in 2004 and, unfortunately, was never updated: it still uses MD5 as its hashing algorithm, but at least it uses HMAC to generate tokens, which makes the use of rainbow tables impractical. These implementations are the simplest password hashers: the inputs are simply the site URL and a password. So the algorithms are, basically, for Wolff's:
token = base64(SHA1(password + domain))
And for Stanford's PwdHash:
token = base64(HMAC(MD5, password, domain))
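To make the two constructions concrete, here is a rough Python sketch of both (the function names are mine, and the real implementations differ in encoding and truncation details):

import base64, hashlib, hmac

def wolff_token(password, domain):
    # plain SHA-1 over the concatenated password and domain
    digest = hashlib.sha1((password + domain).encode()).digest()
    return base64.b64encode(digest).decode()

def pwdhash_token(password, domain):
    # HMAC-MD5 keyed with the password, computed over the domain
    digest = hmac.new(password.encode(), domain.encode(), hashlib.md5).digest()
    return base64.b64encode(digest).decode()

The HMAC variant is what makes precomputed rainbow tables impractical, as noted above.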
Another unrelated implementation that is still around is supergenpass, a bookmarklet created around 2007. It originally used MD5 as well; it now supports SHA-512, although the output is still limited to 24 characters as with MD5 (which needlessly limits the entropy of the resulting password), and it still defaults to MD5 with too few rounds (10, when key derivation recommendations are generally around 10,000, so that bruteforcing is slower).
Note that Chris Zarate, the supergenpass author, actually credits Nic Wolff as the inspiration for his implementation. Supergenpass is still in active development and is available for the browser (as a bookmarklet) or mobile (as a webpage). Supergenpass allows you to modify the password length, but also to add an extra profile secret, which is mixed into the password and generates a personalized identicon, presumably to prevent phishing. This introduces an interesting protection, the profile-specific secret, otherwise only found later in Password Hasher Plus. So the Supergenpass algorithm looks something like this:
token = base64(SHA512(password + profileSecret + ":" + domain, rounds))
Another popular implementation is the Wijjo Password Hasher, created around 2006. It was probably the first shipped as a browser extension, which greatly improved the security of the product, as users no longer had to continually download the software on the fly. Wijjo's algorithm also improved on the above algorithms: it uses HMAC-SHA-1 instead of plain SHA-1 or HMAC-MD5, which makes it harder to recover the plaintext. Password Hasher allows you to set different password policies (use of digits, punctuation, mixed case, special characters, and password length) and saves the site names it uses for future reference. It also happens that the Wijjo Password Hasher, in turn, took its inspiration from a different project, hashapass.com, created in 2006 and also based on HMAC-SHA-1. Indeed, hashapass "can easily be generated on almost any modern Unix-like system using the following command line pattern":
echo -n parameter \
| openssl dgst -sha1 -binary -hmac password \
| openssl enc -base64 \
| cut -c 1-8
So the algorithm here is obviously:
token = base64(HMAC(SHA1, password, domain + ":" + counter))[:8]
... although in the case of Password Hasher, there is a special routine that takes the token and inserts random characters in locations determined by the sum of the values of the characters in the token.
Years later, in 2010, Eric Woodruff ported the Wijjo Password Hasher to Chrome and called it Password Hasher Plus. Like the original Password Hasher, the "plus" version also keeps those settings in the extension and uses HMAC-SHA-1 to generate the password, as it is designed to be backwards-compatible with the Wijjo Password Hasher. Woodruff did add one interesting feature: a profile-specific secret key that gets mixed in to create the security token, like what SuperGenPass does now. Stealing the master password is therefore no longer enough to generate tokens. This addresses one security concern with Password Hasher: a hostile page could watch your keystrokes, steal your master password, and use it to derive passwords on other sites. Having a profile-specific secret key that is not accessible to the site's Javascript works around that issue; still, typing the master password directly into the password field, while convenient, is just a bad idea, period. The final algorithm looks something like:
token = base64(HMAC(SHA1, password, base64(HMAC(SHA1, profileSecret, domain + ":" + counter))))
Honestly, that seems rather strange, but it's what I read from the source code, which nowadays is available only after decompressing the extension. I would have expected the simpler version:
token = base64(HMAC(SHA1, HMAC(SHA1, profileSecret, password), domain + ":" + counter))
The idea here would be to "hide" the master password from bruteforce attacks as early as possible... but maybe the two are equivalent.
Regardless, Password Hasher Plus then takes the token and applies the same special character insertion routine as the Password Hasher.
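Here is that nested construction as a Python sketch, following my reading of the decompressed source quoted above (the character-insertion routine is left out):

import base64, hashlib, hmac

def b64_hmac_sha1(key, msg):
    return base64.b64encode(hmac.new(key.encode(), msg.encode(), hashlib.sha1).digest()).decode()

def hasher_plus_token(password, profile_secret, domain, counter):
    # the inner HMAC binds the profile secret to the site,
    # the outer HMAC then mixes in the master password
    inner = b64_hmac_sha1(profile_secret, domain + ":" + str(counter))
    return b64_hmac_sha1(password, inner)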
Last year, Guillaume Vincent, a French self-described "humanist and scuba diving fan", released the LessPass extension for Chrome, Firefox and Android. LessPass introduces several interesting features. It is probably the first to include a commandline version. It also uses a more robust key derivation algorithm (PBKDF2) and takes into account the username on the site, allowing multi-account support. The original release (version 1) used only 8192 rounds, which is now considered too low. In the bug report it was interesting to note that LessPass couldn't follow the usual practice of running the key derivation for one second to determine the number of rounds needed, as the results must be deterministic.
At first glance, the LessPass source code seems clear and easy to read, which is always a good sign, but of course, the devil is in the details. One key feature of Password Hasher Plus that is missing here is the profile-specific seed, although, as far as I know, it should be impossible for a hostile web page to steal keystrokes from a browser extension.
The algorithm then gets a little more interesting:
entropy = PBKDF2(SHA256, masterPassword, domain + username + counter, rounds, length)
where
rounds=10000
length=32
entropy is then used to pick characters to match the chosen profile.
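In Python terms, the derivation looks roughly like this; a sketch only, as the exact salt encoding and the character-picking step are simplified away:

import hashlib

def lesspass_entropy(master_password, domain, username, counter, rounds=10000, length=32):
    # PBKDF2-HMAC-SHA256 over the master password, salted with the site identity
    salt = (domain + username + str(counter)).encode()
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, rounds, dklen=length)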
Regarding code readability, I quickly got confused by the PBKDF2 implementation: SubtleCrypto.ImportKey() doesn't seem to support PBKDF2 in the API, yet that's how it is used there... Is it just something to extract key material? We see later what looks like a more standard AES-based PBKDF2 implementation, but this code looks strange to me. It could just be my unfamiliarity with newer Javascript coding patterns, however.
There is also a LessPass-specific character-picking routine that is not base64, and that differs from the original Password Hasher algorithm.
A review of password hashers would hardly be complete without mentioning the Master Password and its elaborate algorithm. While the applications surrounding the project are not as refined (there is no web browser plugin and the web interface can't be easily turned into a bookmarklet), the algorithm has been well developed. Of all the password managers reviewed here, Master Password uses one of the strongest key derivation algorithms out there, scrypt:
key = scrypt( password, salt, cost, size, parallelization, length )
where
salt = "com.lyndir.masterpassword" + len(username) + name
cost = 32768
size = 8
parallelization = 2
length = 64
entropy = hmac-sha256(key, "com.lyndir.masterpassword" + len(domain) + domain + counter )
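A Python approximation of the two stages follows, with the caveat that the real algorithm packs the length fields as binary integers and I have simplified the parameter encoding:

import hashlib, hmac

def mp_site_entropy(master_password, name, domain, counter):
    scope = b"com.lyndir.masterpassword"
    # stage 1: expensive scrypt derivation of a 64-byte master key
    salt = scope + len(name).to_bytes(4, "big") + name.encode()
    key = hashlib.scrypt(master_password.encode(), salt=salt,
                         n=32768, r=8, p=2, dklen=64,
                         maxmem=64 * 1024 * 1024)  # these parameters need ~32MB
    # stage 2: cheap per-site HMAC-SHA256 using the derived key
    seed = scope + len(domain).to_bytes(4, "big") + domain.encode() + counter.to_bytes(4, "big")
    return hmac.new(key, seed, hashlib.sha256).digest()

The split is exactly what the EasyPasswords author criticizes below: all the scrypt work happens once per user, and each site password afterwards costs only one HMAC-SHA256.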
Master Password then uses one of 6 sets of "templates" specially crafted to be "easy for a user to read from a screen and type using a keyboard or smartphone" and "compatible with most site's password policies", our "transferable" criteria defined in the first passwords article. For example, the default template mixes vowels, consonants, numbers and symbols, but carefully avoids possibly visually similar characters like O and 0 or i and 1 (although it does mix 1 and l, oddly enough).
The main strength of Master Password seems to be the clear definition of its algorithm (although hashapass.com does give out OpenSSL commandline examples...), which led to its reuse in another application called freepass. The Master Password app also doubles as a stateful password manager...
I also considered including easypasswords, which uses PBKDF2-HMAC-SHA1, in my list of recommendations. I discovered only recently that its author wrote a detailed review of many more password hashers and scores them according to their relative strength. In the end, I covered LessPass instead, since the design is very similar and LessPass seems a bit more usable. Covering LessPass also allowed me to show the contrast and the issues around algorithm changes, for example.
It is also interesting to note that the EasyPasswords author has criticized the Master Password algorithm quite severely:
[...] scrypt isn’t being applied correctly. The initial scrypt hash calculation only depends on the username and master password. The resulting key is combined with the site name via SHA-256 hashing then. This means that a website only needs to break the SHA-256 hashing and deduce the intermediate key — as long as the username doesn’t change this key can be used to generate passwords for other websites. This makes breaking scrypt unnecessary[...]
During a discussion with the Master Password author, he outlined that "there is nothing "easy" about brute-force deriving a 64-byte key through a SHA-256 algorithm." SHA-256 is used in the last stage because it is "extremely fast". scrypt is used as a key derivation algorithm to generate a large secret and is "intentionnally slow": "we don't want it to be easy to reverse the master password from a site password". "But it' unnecessary for the second phase because the input to the second phase is so large. A master password is tiny, there are only a few thousand or million possibilities to try. A master key is 8^64, the search space is huge. Reversing that doesn't need to be made slower. And it's nice for the password generation to be fast after the key has been prepared in-memory so we can display site passwords easily on a mobile app instead of having to lock the UI a few seconds for every password."
Finally, I considered covering Blum's Mental Hash (also covered here and elsewhere). This is an algorithm that can basically be run by the human brain directly. It's not for the faint of heart, however: if I understand it correctly, it requires remembering a password that is basically a string of 26 digits, plus computing modular arithmetic on the outputs. Needless to say, most people don't do modular arithmetic every day...
February marked the 22nd month I contributed to Debian LTS under the Freexian umbrella. I had 8 hours allocated, which I used for:
Nothing exciting, just some minor fixes at several places:
Just over 2 years ago, I started packaging Prometheus for Debian. It was a daunting task, mainly because the Golang ecosystem is so young: almost none of Prometheus's dependencies were packaged, and upstream practices are very different from Debian's.
Today I want to announce that this is complete.
The main part of the Prometheus stack is available in Debian testing, which very soon will become the Stretch release:
prometheus 1.5.2+ds-2 - Monitoring system and time series database
prometheus-alertmanager 0.5.1+ds-7 - Handle and deliver alerts created by Prometheus
prometheus-node-exporter 0.13.0+ds-1 - Prometheus exporter for machine metrics
prometheus-pushgateway 0.3.1+ds-1 - Prometheus exporter for ephemeral jobs

These are available for a bunch of architectures (sadly, not all of them), and are the most important building blocks to deploy Prometheus in your network.
I have also finished preparing backports for all the required dependencies, and jessie-backports has a complete Prometheus stack too!
Adding to these, the archive already has the client libraries for Go, Perl, Python, and Django; and a bunch of other exporters. Except for the Postgres exporter, most of these are going to be in the Stretch release, and I plan to prepare backports for Jessie too:
Note that not all of these have been packaged by me, luckily other Debianites are also working on Prometheus packaging!
I am confident that Prometheus is going to become one of the main monitoring tools in the near future, and I am very happy that Debian is the first distribution to offer official packages for it. If you are interested, there is still lots of work ahead. Patches, bug reports, and co-maintainers are welcome!
Update 3/3/2017: Today the Perl client library was uploaded to unstable, and it is waiting for ftp-master approval.
We've just introduced a new base Oracle Linux 7-slim Docker image that's a minuscule 114MB. Ok, so it's not quite as small as Alpine, but it's now the smallest base image of any of the major distributions. Check out the numbers in the graph to see just how small.

It's not fair, Oracle. If you talk up a -slim image for Oracle Linux, then you should compare it against the -slim images of the other distros, too.
$ sudo docker image ls
REPOSITORY    TAG            IMAGE ID       CREATED      SIZE
debian        jessie-slim    232f5cd0c765   2 days ago   80 MB
debian        jessie         978d85d02b87   2 days ago   123 MB
oraclelinux   7-slim         f005b5220b05   8 days ago   114 MB
debian        stretch        6f4d77d39d73   2 days ago   100 MB
debian        stretch-slim   02ee50628785   2 days ago   57.1 MB

Debian's jessie-slim image is 80MB, smaller than the oraclelinux 7-slim image. And stretch is smaller than Oracle's -slim image by default, while stretch-slim is half the size of their -slim image. Nice, isn't it? :)
02 March, 2017 05:03AM by Hideki Yamane (noreply@blogger.com)
A few days ago I ordered a small batch of the ChaosKey, a small USB dongle for generating entropy created by Bdale Garbee and Keith Packard. Yesterday it arrived, and I am very happy to report that it works great! According to its designers, you need Linux kernel version 4.1 or later for it to work out of the box. I tested it on a Debian Stretch machine (kernel version 4.9), and there it worked just fine, increasing the available entropy very quickly. I wrote a small test one-liner: it first prints the current entropy level, drains /dev/random, and then prints the entropy level once per second for five seconds. Here is the situation without the ChaosKey inserted:
% cat /proc/sys/kernel/random/entropy_avail; \
dd bs=1M if=/dev/random of=/dev/null count=1; \
for n in $(seq 1 5); do \
cat /proc/sys/kernel/random/entropy_avail; \
sleep 1; \
done
300
0+1 oppføringer inn
0+1 oppføringer ut
28 byte kopiert, 0,000264565 s, 106 kB/s
4
8
12
17
21
%
The entropy level increases by 3-4 every second. In such a case, any application requiring random bits (like an HTTPS-enabled web server) will halt and wait for more entropy. And here is the situation with the ChaosKey inserted:
% cat /proc/sys/kernel/random/entropy_avail; \
dd bs=1M if=/dev/random of=/dev/null count=1; \
for n in $(seq 1 5); do \
cat /proc/sys/kernel/random/entropy_avail; \
sleep 1; \
done
1079
0+1 oppføringer inn
0+1 oppføringer ut
104 byte kopiert, 0,000487647 s, 213 kB/s
433
1028
1031
1035
1038
%
Quite the difference. :) I bought a few more than I need, in case someone wants to buy one here in Norway. :)
Update: The dongle was presented at DebConf last year. You might find the talk recording illuminating. It explains exactly what the source of randomness is, if you are unable to spot it from the schematic drawing available from the ChaosKey web site linked at the start of this blog post.
As mentioned in my previous post, I’ll be posting regularly with an update on what I’ve been up to as the GNOME Executive Director, and highlighting some cool stuff around the project!
If you find this dull, they're tagged with [update-post] so hopefully you can filter them out. And dear planet.debian.org folk – if this annoys you too much, I can change the feed category to turn this off, if it's not interesting enough :) However, if you like these or have any suggestions for things you'd like to see here, let me know.
One of the areas we’ve been working on is the sponsorship brochure for GUADEC and GNOME.Asia. Big thanks to Allan Day and the Engagement team for helping out here – and I’m pleased to say it’s almost finished! In the meantime, if you or your company are interested in sponsoring us, please drop a mail to sponsors@guadec.org!
A fairly lengthy and wide-ranging interview with me has been published at cio.com. It covers a bit of my background (although it mistakenly says I worked for Collabora Productivity, rather than Collabora Limited!), and looks at a few different areas of where I see GNOME and how it sits within the greater GNU/Linux movement – I cover "some uncomfortable subjects around desktop Linux". It's well worth a read.
The GNOME 3.24 release is happening soon! As such, the release team announced the string freeze. If you want to help increase how much has been translated into your language, then https://wiki.gnome.org/TranslationProject/JoiningTranslation is a good place to start. I'd like to give a shout out to the translation teams in particular too. Sometimes people don't realise how much work goes into this, and it's fantastic that we're able to reach so many more users with our software.
GNOME is now announced as a mentoring organisation for Google Summer of Code! There are some great ideas for Summer (Well, in the Northern hemisphere anyway) projects, so if you want to spend your time coding on Free Software, and get paid for it, why not sign up as a student?
01 March, 2017 06:38PM by Neil McGovern
So, more because I was intrigued than anything else, I've got a Pi 3 from Mythic Beasts. They're supplied with IPv6-only connectivity, and the file storage is NFS over a private v4 network. The proxy will happily redirect requests to either http or https on the Pi, but without turning on the Proxy Protocol this leaves your logs full of the proxy servers' remote addresses, which is not entirely useful.
I've cheated a bit, because turning on the Proxy Protocol for the hostedpi.com addresses is currently not exposed to customers (it's on the list!); to do it without access to Mythic's backends, use your own domain name (I've also got https://pi3.sommitrealweird.co.uk/ mapped to this Pi).
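As an aside, the Proxy Protocol itself is nothing magic: the proxy simply prepends a single text line to the TCP stream describing the real client before handing over the bytes. A minimal Python parser for the v1 header, purely to illustrate what haproxy's expect-proxy consumes (not what haproxy does internally):

def parse_proxy_v1(header: bytes):
    # e.g. b"PROXY TCP6 2001:db8::1 2001:db8::2 40322 80\r\n"
    fields = header.rstrip(b"\r\n").split(b" ")
    if fields[0] != b"PROXY" or len(fields) != 6:
        raise ValueError("not a PROXY protocol v1 header")
    proto, src, dst, sport, dport = fields[1:]
    return {"proto": proto.decode(), "src": src.decode(),
            "dst": dst.decode(), "sport": int(sport), "dport": int(dport)}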
So, first things first: we get our RPi and make sure that we can log in to it via ssh (I'm nearly always on a v6 connection anyway, so this was a simple case of sshing to the v6 address of the Pi). I then installed haproxy and apache2 on the Pi and went about configuring them. For apache2, I changed it to listen on localhost only, on ports 8080 and 4443; I hadn't at this point enabled the ssl module so, really, the change for 4443 didn't kick in yet. Here's my /etc/apache2/ports.conf file:
# If you just change the port or add more ports here, you will likely also
# have to change the VirtualHost statement in
# /etc/apache2/sites-enabled/000-default.conf
Listen [::1]:8080
<IfModule ssl_module>
Listen [::1]:4443
</IfModule>
<IfModule mod_gnutls.c>
Listen [::1]:4443
</IfModule>
# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
I then edited /etc/apache2/sites-available/000-default.conf to change the VirtualHost line to [::1]:8080.
So, with that in place, we now deploy haproxy in front of it; the basic /etc/haproxy/haproxy.cfg config is:
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
# Default ciphers to use on SSL-enabled listening sockets.
# For more information, see ciphers(1SSL). This list is from:
# https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
ssl-default-bind-options no-sslv3
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
frontend any_http
option httplog
option forwardfor
acl is_from_proxy src 2a00:1098:0:82:1000:3b:1:1 2a00:1098:0:80:1000:3b:1:1
tcp-request connection expect-proxy layer4 if is_from_proxy
bind :::80
default_backend any_http
backend any_http
server apache2 ::1:8080
Obviously after that you then do:
systemctl restart apache2
systemctl restart haproxy
Now you have a Proxy Protocol'd setup from the proxy servers, and you can still talk directly to the Pi over IPv6. You're not yet logging the right remote IPs, but we're a step closer. Next, enable mod_remoteip in apache2:
a2enmod remoteip
And add a file, /etc/apache2/conf-available/remoteip-logformats.conf containing:
LogFormat "%v:%p %a %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" remoteip_vhost_combined
And edit the /etc/apache2/sites-available/000-default.conf to change the CustomLog line to use remoteip_vhost_combined rather than combined as the LogFormat and add the relevant RemoteIP settings:
RemoteIPHeader X-Forwarded-For
RemoteIPTrustedProxy ::1
CustomLog ${APACHE_LOG_DIR}/access.log remoteip_vhost_combined
Now, enable the config and restart apache2:
a2enconf remoteip-logformats
systemctl restart apache2
Now you'll get the right remote IP in the logs (cool, huh!), and, better still, the environment that gets pushed through to cgi scripts/php/whatever is now also correct.
So, you can now happily visit http://www.<your-pi-name>.hostedpi.com/, e.g. http://www.srwpi.hostedpi.com/.
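If you want to convince yourself that the corrected address really reaches application code, a trivial CGI script is enough; Python here just as an example, and the location is whatever your CGI setup uses:

#!/usr/bin/env python3
# prints the remote address apache passes into the CGI environment
import os

print("Content-Type: text/plain")
print()
print("REMOTE_ADDR:", os.environ.get("REMOTE_ADDR", "unset"))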
Next up, you'll want something like dehydrated - I grabbed the packaged version from Debian's jessie-backports repository - so that you can make yourself some nice shiny SSL certificates (why wouldn't you, after all!). Once you've got dehydrated installed, you'll probably want to tweak it a bit; I have some magic extra files that I use. I also suggest getting the dehydrated-apache2 package, which just makes it all much easier too.
/etc/dehydrated/conf.d/mail.sh:
CONTACT_EMAIL="my@email.address"
/etc/dehydrated/conf.d/domainconfig.sh:
DOMAINS_D="/etc/dehydrated/domains.d"
/etc/dehydrated/domains.d/srwpi.hostedpi.com:
HOOK="/etc/dehydrated/hooks/srwpi"
/etc/dehydrated/hooks/srwpi:
#!/bin/sh
action="$1"
domain="$2"
case $action in
deploy_cert)
privkey="$3"
cert="$4"
fullchain="$5"
chain="$6"
cat "$privkey" "$fullchain" > /etc/ssl/private/srwpi.pem
chmod 640 /etc/ssl/private/srwpi.pem
;;
*)
;;
esac
/etc/dehydrated/hooks/srwpi has the execute bit set (chmod +x /etc/dehydrated/hooks/srwpi), and is really only there so that the certificate can be used easily in haproxy.
And finally the file /etc/dehydrated/domains.txt:
www.srwpi.hostedpi.com srwpi.hostedpi.com
Obviously, use your own pi name in there, or better yet, one of your own domain names that you've mapped to the proxies.
Run dehydrated in cron mode (it's noisy, but meh...):
dehydrated -c
That's then generated you some shiny certificates (hopefully). For now, I'll just tell you how to use them through the /etc/apache2/sites-available/default-ssl.conf file: edit that file and change SSLCertificateFile and SSLCertificateKeyFile to point to the /var/lib/dehydrated/certs/www.srwpi.hostedpi.com/fullchain.pem and /var/lib/dehydrated/certs/www.srwpi.hostedpi.com/privkey.pem files, do the edit for the CustomLog as you did for the other default site, and change the VirtualHost to be [::1]:4443. Then enable the site:
a2ensite default-ssl
a2enmod ssl
And restart apache2:
systemctl restart apache2
Now time to add some bits to haproxy.cfg, usefully this is only a tiny tiny bit of extra config:
frontend any_https
option httplog
option forwardfor
acl is_from_proxy src 2a00:1098:0:82:1000:3b:1:1 2a00:1098:0:80:1000:3b:1:1
tcp-request connection expect-proxy layer4 if is_from_proxy
bind :::443 ssl crt /etc/ssl/private/srwpi.pem
default_backend any_https
backend any_https
server apache2 ::1:4443 ssl ca-file /etc/ssl/certs/ca-certificates.crt
Restart haproxy:
systemctl restart haproxy
And we're all done! REMOTE_ADDR will appear as the correct remote address in the logs, and in the environment.
01 March, 2017 06:35PM by Brett Parker (iDunno@sommitrealweird.co.uk)
Github recently drafted an update to their Terms Of Service. The new TOS is potentially very bad for copylefted Free Software: it could neuter copyleft entirely, so that GPL-licensed software hosted on Github effectively carries an implicit BSD-like license. I'll leave the full analysis to the lawyers, but see Thorsten's analysis.
I contacted Github about this weeks ago, and received only an anodyne response. The Free Software Foundation was also talking with them about it. It seems that Github doesn't care or has some reason to want to effectively neuter copyleft software licenses.
The possibility that a Github TOS change could force a change to the license of your software hosted there, and that it's complicated enough that I'd have to hire a lawyer to know for sure, makes Github not worth bothering to use.
Github's value proposition was never very high for me, and it has now gone negative.
I am deleting my repositories from Github at this time. If you used the Github mirrors for git-annex, propellor, ikiwiki, etckeeper, myrepos, click on the links for the non-Github repos (git.joeyh.name also has mirrors). Also, github-backup has a new website and repository off of github. (There's an oncoming severe weather event here, so it may take some time before I get everything deleted and cleaned up.)
[Some commits to git-annex were pushed to Github this morning by an automated system, but I had NOT accepted their new TOS at that point, and explicitly do NOT give Github or anyone any rights to git-annex not granted by its GPL and AGPL licenses.]
See also: PDF of Github TOS that can be read without being forced to first accept Github's TOS
Yay! So, it's a year and a bit on from the last post (eeep!), and we get the news of the Psion Gemini - I wants one, that looks nice and shiny and just the right size to not be inconvenient to lug around all the time, and far better for ssh usage than the onscreen keyboard on my phone!
01 March, 2017 03:12PM by Brett Parker (iDunno@sommitrealweird.co.uk)
01 March, 2017 07:32AM by Junichi Uekawa
The libesedb Debian backport was sponsored by my employer. All other work was done on a volunteer basis.
Please use the correct (perma)link to bookmark this article, not the page listing all wlog entries of the last decade. Thank you.
Some updates inline and at the bottom.
The new Terms of Service of GitHub became effective today, which is quite problematic — there was a review phase, but my reviews pointing out the problems were not answered, and, while the language is somewhat changed from the draft, they became effective immediately.
Now, the new ToS are not so bad that one immediately must stop using their service out of disagreement, but it's important to realise that certain content may no longer legally be pushed to GitHub. I'll try to explain what is affected, and why.
I’m mostly working my way backwards through section D, as that’s where the problems I identified lie, and because this is from easier to harder.
Note that using a private repository does not help, as the same terms apply.
Section D.7 requires the person uploading content to waive any and all attribution rights. Ostensibly “to allow basic functions like search to work”, which I can even believe, but, for a work the uploader did not create completely by themselves, they can’t grant this licence.
The CC licences are notably bad because they don't permit sublicencing, but even so, anything requiring attribution can, in almost all cases, not be "written or otherwise, created or uploaded by our Users". This is fact, and the exceptions are few.
Section D.5 requires the uploader to grant all other GitHub users…
Note that section D.4 is similar, but granting the licence to GitHub (and their successors); while this is worded much more friendly than in the draft, this fact only makes it harder to see if it affects works in a similar way. But that doesn’t matter since D.5 is clear enough. (This doesn’t mean it’s not a problem, just that I don’t want to go there and analyse D.4 as D.5 points out the same problems but is easier.)
This means that any and all content under copyleft licences is also no longer welcome on GitHub.
Some licences are famous for requiring people to keep the original intact while permitting patches to be piled on top; this is actually permissible for Open Source, even though annoying, and the most common LaTeX licence is rather close to that. Section D.3 says any (partial) content can be removed — though keeping a PKZIP archive of the original is a likely workaround.
Anything copyleft (GPL, AGPL, LGPL, CC-*-SA) or requiring attribution (CC-BY-*, but also 4-clause BSD, Apache 2 with NOTICE text file, …) is affected. BSD-style licences without advertising clause (MIT/Expat, MirOS, etc.) are probably not affected… if GitHub doesn’t go too far and dissociates excerpts from their context and legal info, but then nobody would be able to distribute it, so that’d be useless.
Only “continuing to use GitHub” constitutes accepting the new terms. This means that repositories from people who last used GitHub before March 2017 are excluded.
Even then, the new terms likely only apply to content uploaded in March 2017 or later (note that git commit dates are unreliable; you have to actually check whether the contribution dates from March 2017 or later).
And then, most people are likely unaware of the new terms. If they upload content for which they themselves don't have the appropriate rights (waivers to attribution and copyleft/share-alike clauses), it's plainly illegal, and it also makes your upload of that content, or a derivative thereof, no more legal.
Granted, people who, in full knowledge of the new ToS, share any “User-Generated Content” with GitHub on or after 1ˢᵗ March, 2017, and actually have the appropriate rights to do that, can do that; and if you encounter such a repository, you can fork, modify and upload that iff you also waive attribution and copyleft/share-alike rights for your portion of the upload. But — especially in the beginning — these will be few and far between (even more so taking into account that GitHub is, legally spoken, a mess, and they don’t even care about hosting only OSS / Free works).
I’ll be starting to remove any such content of mine, such as the source code mirrors of jupp, which is under the GNU GPLv1, now and will be requesting people who forked such repositories on GitHub to also remove them. This is not something I like to do but something I am required to do in order to comply with the licence granted to me by my upstream. Anything you’ve found contributed by me in the meantime is up for review; ping me if I forgot something. (mksh is likely safe, even if I hereby remind you that the attribution requirement of the BSD-style licences still applies outside of GitHub.)
(Pet peeve: why can’t I “adopt a licence” with British spelling? They seem to require oversea barbarian spelling.)
Atlassian Bitbucket has similar terms (even worse actually; I looked at them to see whether I could mirror mksh there, and turns out, I can’t if I don’t want to lose most of what few rights I retain when publishing under a permissive licence). Gitlab seems to not have such, but requires you to indemnify them… YMMV. I think I’ll self-host the removed content.
I’m in contact with someone from GitHub Legal (not explicitly in the official capacity though) and will try to explain the sheer magnitude of the problem and ways to solve this (leaving the technical issues to technical solutions and requiring legal solutions only where strictly necessary), but for now, the ToS are enacted (another point of my criticism of this move) and thus, the aforementioned works must go off GitHub right now.
That’s not to say they may not come back later once this all has been addressed, if it will be addressed to allow that. The new ToS do have some good; for example, the old ToS said “you allow every GitHub user to fork your repositories” without ever specifying what that means. It’s just that the people over at GitHub need to understand that, both legally and technically¹, any and all OSS licences² grant enough to run a hosting platform already³, and separate explicit grants are only needed if a repository contains content not under an OSI/OKFN/Copyfree/FSF/DFSG-free licence. I have been told that “these are important issues” and been thanked for my feedback; we’ll see what comes from this.
① maybe with a little more effort on the coders’ side³
② All licences on one of those lists or conformant to the DFSG, OSD or OKD should do⁴.
③ e.g. when displaying search results, add a note “this is an excerpt, click HERE to get to the original work in its context, with licence and attribution” where “HERE” is a backlink to the file in the repository
④ It is understood those organisations never un-approve any licence that rightfully conforms to those definitions (also in cases like a grant saying “just use any OSS² licence” which is occasionally used)
Update: In the meantime, joeyh has written not one but two insightful articles (although I disagree in some details; the new licence is only to GitHub users (D.5) and GitHub (D.4) and only within their system, so, while uploaders would violate the ToS (they cannot grant the licence) and (probably) the upstream-granted copyleft licence, this would not mean that everyone else wasn’t bound by the copyleft licence in, well, enough cases to count (yes it’s possible to construct situations in which this hurts the copyleft fraction, but no, they’re nowhere near 100%).
01 March, 2017 12:00AM by MirOS Developer tg (tg@mirbsd.org)
Here is my monthly update covering what I have been doing in the free software world (previous month):
Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.
The motivation behind the Reproducible Builds effort is to permit verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
(I have been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.)
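In practice, "coming to a consensus" can be as simple as independent rebuilders comparing checksums of the artifacts they each produced; a toy sketch of the comparison step:

import hashlib, sys

def sha256(path):
    # hash a build artifact so independent rebuilders can compare results
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

a, b = sys.argv[1], sys.argv[2]
print("reproducible" if sha256(a) == sha256(b) else "MISMATCH")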
This month I:
I also made the following changes to our tooling:
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.
buildinfo.debian.net is my experiment into how to process, store and distribute .buildinfo files after the Debian archive software has processed them.
strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.
This month I have been paid to work 13 hours on Debian Long Term Support (LTS). In that time I did the following:
I sponsored the following uploads:
I also performed the following QA uploads:
Finally, I made the following non-maintainer uploads:
I also filed 15 FTBFS bugs against binaryornot, chaussette, examl, ftpcopy, golang-codegangsta-cli, hiro, jarisplayer, libchado-perl, python-irc, python-stopit, python-stopit, python-stopit, python-websockets, rubocop & yash.
As a Debian FTP assistant I ACCEPTed 116 packages: autobahn-cpp, automat, bglibs, bitlbee, bmusb, bullet, case, certspotter, checkit-tiff, dash-el, dash-functional-el, debian-reference, el-x, elisp-bug-hunter, emacs-git-messenger, emacs-which-key, examl, genwqe-user, giac, golang-github-cloudflare-cfssl, golang-github-docker-goamz, golang-github-docker-libnetwork, golang-github-go-openapi-spec, golang-github-google-certificate-transparency, golang-github-karlseguin-ccache, golang-github-karlseguin-expect, golang-github-nebulouslabs-bolt, gpiozero, gsequencer, jel, libconfig-mvp-slicer-perl, libcrush, libdist-zilla-config-slicer-perl, libdist-zilla-role-pluginbundle-pluginremover-perl, libevent, libfunction-parameters-perl, libopenshot, libpod-weaver-section-generatesection-perl, libpodofo, libprelude, libprotocol-http2-perl, libscout, libsmali-1-java, libtest-abortable-perl, linux, linux-grsec, linux-signed, lockdown, lrslib, lua-curses, lua-torch-cutorch, mariadb-10.1, mini-buildd, mkchromecast, mocker-el, node-arr-exclude, node-brorand, node-buffer-xor, node-caller, node-duplexer3, node-ieee754, node-is-finite, node-lowercase-keys, node-minimalistic-assert, node-os-browserify, node-p-finally, node-parse-ms, node-plur, node-prepend-http, node-safe-buffer, node-text-table, node-time-zone, node-tty-browserify, node-widest-line, npd6, openoverlayrouter, pandoc-citeproc-preamble, pydenticon, pyicloud, pyroute2, pytest-qt, pytest-xvfb, python-biomaj3, python-canonicaljson, python-cgcloud, python-gffutils, python-h5netcdf, python-imageio, python-kaptan, python-libtmux, python-pybedtools, python-pyflow, python-scrapy, python-scrapy-djangoitem, python-signedjson, python-unpaddedbase64, python-xarray, qcumber, r-cran-urltools, radiant, repo, rmlint, ruby-googleauth, ruby-os, shutilwhich, sia, six, slimit, sphinx-celery, subuser, swarmkit, tmuxp, tpm2-tools, vine, wala & x265.
I additionally filed 8 RC bugs against packages that had incomplete debian/copyright files: checkit-tiff, dash-el, dash-functional-el, libcrush, libopenshot, mkchromecast, pytest-qt & x265.
Here's what happened in the Reproducible Builds effort between Sunday February 19 and Saturday February 25 2017:
Christos Zoulas wrote a blog post entitled "NetBSD fully reproducible builds", which generated some discussion on Hacker News and was mentioned on DistroWatch.
Christos also reported that NetBSD's base system is now 100.0% reproducible in our current test framework. Whilst we test fewer variations there than in Debian, this is highly commendable.
Microsoft requires reproducible binaries for signing shim code for secure boot.
Thanks to Ed Maste, the FreeBSD base system reached 99.6% too after he saw the news about NetBSD. This was also achieved using non-default settings and should be considered merely a "hey, we can do that too" (though Ed is slightly sad they missed 100%). The real plan is still to achieve 100% reproducibility with default settings.
Introduction to Reproducible Builds will be presented by Vagrant Cascadian at Scale15x in Pasadena, California, March 5th.
On March 23rd Holger Levsen will give a talk at the German Unix User Group's "Frühjahrsfachgespräch" about Reproducible Builds everywhere.
Verifying Software Freedom with Reproducible Builds will be presented by Vagrant Cascadian at Libreplanet2017 in Boston, March 25th-26th.
Chris Lamb:
9 package reviews have been added, 3 have been updated and 1 has been removed in this week, adding to our knowledge about identified issues.
During our reproducibility testing, the following FTBFS bugs have been detected and reported by:
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.
buildinfo.debian.net is our experiment into how to process, store and distribute .buildinfo files after the Debian archive software has processed them.
.buildinfo files to buildinfo.debian.net and added notification about this failure.
This week's edition was written by Chris Lamb, Ed Maste & Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.
Previously: v4.9.
Here’s a quick summary of some of the interesting security things in last week’s v4.10 release of the Linux kernel:
PAN emulation on arm64
Catalin Marinas introduced ARM64_SW_TTBR0_PAN, which is functionally the arm64 equivalent of arm’s CONFIG_CPU_SW_DOMAIN_PAN. While Privileged eXecute Never (PXN) has been available in ARM hardware for a while now, Privileged Access Never (PAN) will only be available in hardware once vendors start manufacturing ARMv8.1 or later CPUs. Right now, everything is still ARMv8.0, which left a bit of a gap in security flaw mitigations on ARM since CONFIG_CPU_SW_DOMAIN_PAN can only provide PAN coverage on ARMv7 systems, but nothing existed on ARMv8.0. This solves that problem and closes a common exploitation method for arm64 systems.
thread_info relocation on arm64
As done earlier for x86, Mark Rutland has moved thread_info off the kernel stack on arm64. With thread_info no longer on the stack, it’s more difficult for attackers to find it, which makes it harder to subvert the very sensitive addr_limit field.
linked list hardening
I added CONFIG_BUG_ON_DATA_CORRUPTION to restore the original CONFIG_DEBUG_LIST behavior that existed prior to v2.6.27 (9 years ago): if list metadata corruption is detected, the kernel refuses to perform the operation, rather than just WARNing and continuing with the corrupted operation anyway. Since linked list corruption (usually via heap overflows) is a common method for attackers to gain a write-what-where primitive, it’s important to stop the list add/del operation if the metadata is obviously corrupted.
seeding kernel RNG from UEFI
A problem for many architectures is finding a viable source of early boot entropy to initialize the kernel Random Number Generator. For x86, this is mainly solved with the RDRAND instruction. On ARM, however, the solutions continue to be very vendor-specific. As it turns out, UEFI is supposed to hide various vendor-specific things behind a common set of APIs. The EFI_RNG_PROTOCOL call is designed to provide entropy, but it can’t be called when the kernel is running. To get entropy into the kernel, Ard Biesheuvel created a UEFI config table (LINUX_EFI_RANDOM_SEED_TABLE_GUID) that is populated during the UEFI boot stub and fed into the kernel entropy pool during early boot.
arm64 W^X detection
As done earlier for x86, Laura Abbott implemented CONFIG_DEBUG_WX on arm64. Now any dangerous arm64 kernel memory protections will be loudly reported at boot time.
64-bit get_user() zeroing fix on arm
While the fix itself is pretty minor, I like that this bug was found through a combined improvement to the usercopy test code in lib/test_user_copy.c. Hoeun Ryu added zeroing-on-failure testing, and I expanded the get_user()/put_user() tests to include all sizes. Neither improvement alone would have found the ARM bug, but together they uncovered a typo in a corner case.
no-new-privs visible in /proc/$pid/status
This is a tiny change, but I like being able to introspect processes externally. Prior to this, I wasn’t able to trivially answer the question “is that process setting the no-new-privs flag?” To address this, I exposed the flag in /proc/$pid/status, as NoNewPrivs.
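A quick way to see it in action on a v4.10 kernel (setpriv here is the util-linux tool, used purely for illustration):

grep NoNewPrivs /proc/$$/status    # prints "NoNewPrivs: 0" for a normal shell
setpriv --no-new-privs sh -c 'grep NoNewPrivs /proc/$$/status'    # prints 1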
That’s all for now! Please let me know if you saw anything else you think needs to be called out. :) I’m already excited about the v4.11 merge window opening…
© 2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
28 February, 2017 06:31AM by kees
Once again, I'm making an announcement mainly for my local circle of friends and (gasp!) followers. For those of you over 100Km away from Mexico City, please disregard this message.
Back in July 2015, and after two years of hard work, my university finished the publishing step of my second book. This is a textbook for the subject I teach at Computer Engineering: Operating Systems Fundamentals.
The book is, from its inception, fully available online under a permissive (CC-BY) license. One of the book's intended contributions is to present a text natively written in Spanish. Besides, our goal (I coordinated a team of authors, working with two colleagues from Rosario, Argentina, and one from Cauca, Colombia) was to provide a book students can easily share with no legal issues.
The book has received many good reviews so far, and after teaching based on it for four years (while working on it and after its publication), I can attest the material is light enough to fit a Bachelor's-level course, while deep enough to make our students sweat healthily ;-)
Anyway: I have been scheduled to present the book at my university's main book show, the 38 Feria Internacional del Libro del Palacio de Minería, this Saturday 2017.03.04 at 16:00 in the Salón Manuel Tolsá. What's even better: this time, I won't be preparing a speech! The book will be presented by two very good friends of mine, José María Serralde and Rolando Cedillo. Both of them are clever, witty, fun, and a real honor to work with. Of course, having them present our book is more than a double honor.
So, everybody who can make it: FIL Minería is always great and fun. Come share the love! Come have a book! Or, at least, have a good time and a nice chat with us!
28 February, 2017 05:21AM by gwolf
Working with 9-Patch Images, Adapter Classes, Layouts in Android.
Before starting this new task, I had never wondered: “How does that bubble around our chat messages wrap around the width of the text we write?”
The images used as the background of our messages are called 9-Patch images.
They stretch themselves according to the text length and font size!
Android will automatically resize the image to accommodate the contents, like this:

Source: developer.android.com
How great it would be if the clothes we wear could also work the same way, fitting according to body size. I could then still wear my cute, nostalgic childhood dresses...
Below are the 9-Patch images I edited. There are two sets of bubble images, different for incoming and outgoing SIP messages.

These images have to be designed a certain way: store them at the smallest sensible size and leave a 1px border on all sides. The details are clearly explained in the Android documentation:
https://developer.android.com/guide/topics/graphics/2d-graphics.html#nine-patch
Then, name the image by inserting “.9” between the file name and the extension.
For example, if your image is named bubble.png, rename it to bubble.9.png.
It should then be stored like any other image file in the res/drawable folder.
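In shell terms, the convention looks like this (the file name and project layout are just an example):

# Apply the ".9" naming convention, then store with the other drawables:
mv bubble.png bubble.9.png
mv bubble.9.png res/drawable/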
Using 9-patch images, these stretching problems are taken care of.
I had to modify the existing Lumicall SIP message screen, which used a simple ListView as the chat message holder, and replace it with 9-patch bubble images to make it more interactive.
Voila! Such a simple way to provide a valuable usability feature.
28 February, 2017 04:46AM by urvikagola
I worked on creating the whiptail and corresponding gpg scripts for four options for primary and/or secondary/subkey generation; a rough sketch of the “Quick” flow follows the lists below.
1) A “Quick” Generate Primary and Secondary Key task that only asks the user for the UID and password and creates an rsa4096 primary key, an rsa2048 secondary key and an rsa2048 laptop signing subkey.
2) A “Custom” Generate Primary and Secondary Key task that gives the user more flexibility in algo, usage and expiry, but still adheres to PGP best practices. For the primary key, the user chooses between rsa4096 key or an ECC curve, sign/cert or cert only for usage, and the expiry. For the secondary encryption key, the user also chooses between RSA and ECC, but can choose a key length between 2048 and 4096, and the expiry.
3) Generate Primary Key Only: Same as the primary key generation for #2
4) Generate a Custom Subkey: The user gets to choose between rsa<2048-4096>, dsa<2048-3072>, elg<2048-4096>, and an ECC curve, and choose the usage and expiry. The tricky part was making sure that the usage matched the algorithm. For example, DSA is only capable of sign and auth, while RSA can do sign, auth, and encrypt. ECC curves are capable of all usages, however, encrypt cannot overlap with sign/auth for any curve, even though the name of the curve is the same. So I used radio and checkboxes to make it as easy as possible for the user.
These options follow the best practices outlined at riseup and Debian Wiki pages, such as:
The primary key should use a strong algorithm and should only have the usages cert and/or sign.
Subkeys can be 2048-4096 bits, preferably RSA, DSA-2 or ECC.
The UID prompt shouldn’t ask for a comment.
DSA-1024 is deprecated, so I restricted DSA to a minimum of 2048.
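To give an idea of how these pieces fit together, here is a rough, hypothetical sketch of the “Quick” flow; the variable names, dialog sizes and expiry default are illustrative, not the actual project scripts:

#!/bin/sh
# Illustrative sketch only. whiptail prints the user's input on stderr,
# hence the file-descriptor shuffle to capture it.
NAME=$(whiptail --inputbox "Real name" 8 70 3>&1 1>&2 2>&3) || exit 1
EMAIL=$(whiptail --inputbox "Email address" 8 70 3>&1 1>&2 2>&3) || exit 1
PASS=$(whiptail --passwordbox "Passphrase" 8 70 3>&1 1>&2 2>&3) || exit 1

# rsa4096 primary (cert+sign) with an rsa2048 encryption subkey; note the
# deliberate absence of Name-Comment, per the best practices above. The
# rsa2048 laptop signing subkey would be added in a follow-up step.
gpg --batch --gen-key <<EOF
Key-Type: RSA
Key-Length: 4096
Key-Usage: sign
Subkey-Type: RSA
Subkey-Length: 2048
Subkey-Usage: encrypt
Name-Real: ${NAME}
Name-Email: ${EMAIL}
Expire-Date: 2y
Passphrase: ${PASS}
%commit
EOF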
git-annex has never used SHA1 by default. But, there are concerns about SHA1 collisions being used to exploit git repositories in various ways. Since git-annex builds on top of git, it inherits its foundational SHA1 weaknesses. Or does it?
Interestingly, when I dug into the details, I found a way to make git-annex repositories secure from SHA1 collision attacks, as long as signed commits are used (and verified).
When git commits are signed (and verified), SHA1 collisions in commits are not a problem. And there seems to be no way to generate usefully colliding git tree objects (unless they contain really ugly binary filenames). That leaves blob objects, and when using git-annex, those are git-annex key names, which can be secured from being a vector for SHA1 collision attacks.
This needed some work on git-annex, which is now done, so look for a release in the next day or two that hardens it against SHA1 collision attacks. For details about how to use it, and more about why it avoids git's SHA1 weaknesses, see https://git-annex.branchable.com/tips/using_signed_git_commits/.
My advice is, if you are using a git repository to publish or collaborate on binary files, in which it's easy to hide SHA1 collisions, you should switch to using git-annex and signed commits.
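For illustration, the git side of that advice uses only standard commands (the git-annex-specific configuration is covered in the tip linked above):

# Sign every commit in this repository:
git config commit.gpgsign true
# Verify before trusting fetched history:
git log --show-signature -1
git verify-commit HEAD    # fails if the signature is missing or bad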
PS: Of course, verifying gpg signatures on signed commits adds some complexity and won't always be done. It turns out that the current SHA1 known-prefix collision attack cannot be usefully used to generate colliding commit objects, although a future common-prefix collision attack might. So, even if users don't verify signed commits, I believe that repositories using git-annex for binary files will be as secure as git repositories containing binary files used to be. However secure that might be...
Following the post about 10-bit Y'CbCr earlier this week, I thought I'd make an actual test of 10-bit H.264 compression for live streaming. The basic question is: sure, it's better per bit, but it's also slower, so is it better per MHz? This is largely inspired by Ronald Bultje's post about streaming performance, where he largely showed that HEVC is currently useless for live streaming from software; unless you can encode at x264's “veryslow” preset (which, at 720p60, means basically rather simple content and 20 cores or so), the best x265 presets you can afford will give you worse quality than the best x264 presets you can afford. My results will maybe not be as scientific, but hopefully still enlightening.
I used the same test clip as Ronald, namely a two-minute clip of Tears of Steel. Note that this is an 8-bit input, so we're not testing the effects of 10-bit input; it's just testing the increased internal precision in the codec. Since my focus is practical streaming, I ran the latest version of x264 at four threads (a typical desktop machine), using one-pass encoding at 4000 kbit/sec. Nageru's speed control has 26 presets to choose from, which gives pretty smooth steps between neighboring ones, but I've been sticking to the ten standard x264 presets (ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, veryslow, placebo). Here's the graph:

The x-axis is seconds used for the encode (note the logarithmic scale; placebo takes 200–250 times as long as ultrafast). The y-axis is SSIM dB, so up and to the left is better. The blue line is 8-bit, and the red line is 10-bit. (I ran most encodes five times and averaged the results, but it doesn't really matter, due to the logarithmic scale.)
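For concreteness, an encode along these lines can be reproduced with something like the following (file names are made up; current x264 builds select bit depth with --output-depth, whereas at the time of writing 10-bit required a separately compiled binary):

# One-pass ABR at 4000 kbit/sec with four threads, 10-bit output:
x264 --preset medium --threads 4 --bitrate 4000 \
    --output-depth 10 -o tos-10bit.mkv tears_of_steel_720p.y4m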
The results are actually much stronger than I assumed; if you run on (8-bit) ultrafast or superfast, you should stay with 8-bit, but from there on, 10-bit is on the Pareto frontier. Actually, 10-bit veryfast (18.187 dB) is better than 8-bit medium (18.111 dB), while being four times as fast!
But not all of us have a relation to dB quality, so I chose to also do a test that maybe is a bit more intuitive, centered around the bitrate needed for constant quality. I locked quality to 18 dB, i.e., for each preset, I adjusted the bitrate until the SSIM showed 18.000 dB plus/minus 0.001 dB. (Note that this means faster presets get less of a speed advantage, because they need a higher bitrate, which means more time spent entropy coding.) Then I measured the encoding time (again five times) and graphed the results:

x-axis is again seconds, and y-axis is bitrate needed in kbit/sec, so lower and to the left is better. Blue is again 8-bit and red is again 10-bit.
If the previous graph was enough to make me intrigued, this is enough to make me excited. In general, 10-bit gives 20-30% lower bitrate for the same quality and CPU usage! (Compare this with the supposed “up to 50%” benefits of HEVC over H.264, given infinite CPU usage.) The most dramatic example is when comparing the “medium” presets directly, where 10-bit runs at 2648 kbit/sec versus 3715 kbit/sec (29% lower bitrate!) and is only 5% slower. As one progresses towards the slower presets, the gap narrows somewhat (placebo is 27% slower and “only” 24% lower bitrate), but in the realistic middle range, the difference is quite marked. If you run 3 Mbit/sec at 10-bit, you get the quality of 4 Mbit/sec at 8-bit.
So is 10-bit H.264 a no-brainer? Unfortunately, no; the client hardware support is nearly nil. Not even Skylake, which can do 10-bit HEVC encoding in hardware (and 10-bit VP9 decoding), can do 10-bit H.264 decoding in hardware. Worse still, mobile chipsets generally don't support it. There are rumors that iPhone 6s supports it, but these are unconfirmed; some Android chips support it, but most don't.
I guess this explains a lot of the limited uptake; since it's in some ways a new codec, implementers are more keen to get the full benefits of HEVC instead (even though the licensing situation is really icky). The only group I know of that has really picked it up as a distribution format is the anime scene, and they're feeling quite specific pains due to unique content (large gradients giving pronounced banding in undithered 8-bit).
So, 10-bit H.264: It's awesome, but you can't have it. Sorry :-)
February 2017 was my sixth month as a Debian LTS team member. I was allocated 5 hours and had 9.75 hours left over from January 2017. This makes a total of 14.75 hours. I spent all of them doing the following:
Non-profits that provide project support have proven themselves to be necessary for the success and advancement of individual projects and Free Software as a whole. The Free Software Foundation (founded in 1985) serves as a home to GNU projects and a canonical list of Free Software licenses. The Open Source Initiative came about in 1998, maintaining the Open Source Definition, based on the Debian Free Software Guidelines, with affiliate members including Debian, Mozilla, and the Wikimedia Foundation. Software in the Public Interest (SPI) was created in the late 90s largely to act as a fiscal sponsor for projects like Debian, enabling it to do things like accept donations and handle other financial transactions.
More recently (2006), the Software Freedom Conservancy was formed. Among other activities—like serving as a fiscal sponsor, infrastructure provider, and support organization for a number of free software projects including Git, Outreachy, and the Debian Copyright Aggregation Project—they protect user freedom via copyleft compliance and GPL enforcement work. Without a willingness to act when licenses are violated, copyleft has no power. Through communication, collaboration, and—only as last resort—litigation, the Conservancy helps everyone who uses a freedom respecting license.
The Conservancy has been aggressively fundraising in order to not just continue its current operations, but to expand its work, staff, and efforts. It recently launched a donation-matching campaign thanks to the generosity and dedication of an anonymous donor. Everyone who joins the Conservancy as an annual Supporter by February 28th will have their donation matched.
A number of us are already Supporters, and we hope you will join us in supporting the work of an organization that supports us.
systemd 233 is scheduled to be released next week, and there is only a handful of small issues left. As usual there are tons of improvements and fixes, but the most intrusive one is probably another attempt to move from legacy cgroup v1 to a “hybrid” setup where the new unified (cgroup v2) hierarchy is mounted at /sys/fs/cgroup/unified/ and the legacy one stays at /sys/fs/cgroup/ as usual. This should provide an easier path for software like Docker or LXC to migrate to the unified hierarchy, but even that hybrid mode broke some bits.
While systemd 233 will not make it into Debian stretch or Ubuntu zesty, as both are in feature freeze, it will soon be available in Debian experimental, and in the next Ubuntu release after 17.04 gets released. Thus now is another good time to give this some thorough testing!
To help with this, please give the PPA with builds from upstream master a spin. In addition to the usual packages for Ubuntu 16.10 I also uploaded a build for Ubuntu zesty, and a build for Debian stretch (aka testing) which also works on Debian sid. You can use that URL as an apt source:
deb [trusted=yes] https://people.debian.org/~mpitt/tmp/systemd-master-20170225/ /
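For example, one hypothetical way to wire that up (the fragment name is arbitrary):

# Add the source fragment, then pull in the newer systemd packages:
echo 'deb [trusted=yes] https://people.debian.org/~mpitt/tmp/systemd-master-20170225/ /' |
    sudo tee /etc/apt/sources.list.d/systemd-master.list
sudo apt update
sudo apt full-upgrade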
These packages pass our autopkgtests and I tested them manually too. LXC and LXD work fine; docker.io/runc needs a fix, which I uploaded to Ubuntu zesty. (It’s not yet available in Debian, sorry.)
Please file reports about regressions on GitHub, but please also let me know about successes on my Google+ page, so that we can get some idea of how many people have tested this.
Thank you, and happy booting!
Very strange. Verrrry strange.
Yesterday I wrote a blog post on spam stuff that has been hitting my mailbox. Nothing too deep, just me scratching my head.
Coincidentally (I guess/hope), I have been getting messages via my Bitlbee on one of my Jabber accounts, offering me ransomware services. I am reproducing one here, omitting of course everything I can recognize as their brand names and related URLs (as I'm not going to promote the 3vi1-doers). I'm reproducing it in whole, as I'm sure the information will be interesting for some.
*BRAND* Ransomware - The Most Advanced and Customisable you've Ever Seen Conquer your Independence with *BRAND* Ransomware Full Lifetime License! * UNIQUE FEATURES * NO DEPENDENCIES (.net or whatever)!!! * Edit file Icon and UAC - Works on All Windows Versions * Set Folders and Extensions to Encrypt, Deadline and Russian Roulette * Edit the Text, speak with voice (multilang) and Colors for Ransom Window * Enable/disable USB infect, network spread & file melt * Set Process Name, sleep time, update ransom amount, Give mercy button * Full-featured headquarter (for Windows) with unlimited builds, PDF reports, charts and maps, totally autonomous operation * PHP Bridges instead of expensive C&C servers! * Automatic Bitcoin payment detection (impossible to bypass/crack - we challege who says the contrary to prove what they say!) * Totally/Mathematically IMPOSSIBLE to DECRYPT! Period. * Award-Winning Five-Stars support and constant updates! * We Have lot vouchs in *BRAND* Market, can check! Watch the promo video: *URL* Screenshots: *URL* Website: *URL* Price: $389 Promo: just $309 - 20% OFF! until 25th Feb 2017 Jabber: *JID*
I think I can comment on this with my students. Hopefully, this is interesting to others.
Now... I had never received Jabber spam before. This message has been sent to me 14 times in the last 24 hours (all from different JIDs, all unknown to me). I hope this does not last forever :-/ Otherwise, I will have to learn more about how to configure Bitlbee to ignore contacts not known to me. Grrr...
24 February, 2017 07:06PM by gwolf
One of the products I have done some work on at Red Hat has recently been released to customers and there have been a few things written about it:
The truth of life: the cremation ground.
It is the abode of Shiva.
Kali's tandava dance
offers its salutation to Shiva.
24 February, 2017 02:43PM by Ritesh Raj Sarraf
Must be the irony of life that I was about to give up the TclCurl Debian package some time ago, and now I'm using it again for some very old and horrible web scraping code.
The world moved on to https but the Tcl http package only supports unencrypted http. You can combine it with the tls package as explained in the Wiki, but that seems to be overly complicated compared to just loading the TclCurl binding and moving on with something like this:
package require TclCurl
# download to a variable
curl::transfer -url https://sven.stormbind.net -bodyvar page
# or store it in a file
curl::transfer -url https://sven.stormbind.net -file page.html
Now the remaining problem is that the code is unmaintained upstream and there is one codebase on Bitbucket and one on GitHub. While I fed patches to the Bitbucket repo and thus based the Debian package on it, the GitHub repo has diverged in a different direction.