August 26, 2016

hackergotchi for Clint Adams

Clint Adams

Any way the wind blows

NOAA's decommissioning of weather.noaa.gov led to breakage which reveals just how much duplication of code and effort there is for fetching and parsing weather data.

26 August, 2016 04:21PM

hackergotchi for Norbert Preining

Norbert Preining

ESSLLI 2016 Course: Algebraic Specification and Verification with CafeOBJ

During this year’s ESSLLI (European Summer School in Logic, Language and Information) I taught a course on Algebraic Specification and Verification with CafeOBJ. Now that the course is over I can relax a bit and enjoy the beauty of the surrounding South Tyrol.

For those who couldn’t attend the ESSLLI or the course, here are the course materials:

Thanks to the participants for the interesting questions and participation.

What follows is the original description of the course.

Abstract

Ensuring the correctness of complex systems like computer programs or communication protocols is gaining ever-increasing importance. For correctness it does not suffice to consider only the finished system; correctness is needed already at the level of the specification, that is, before the actual implementation starts. To this end, algebraic specification and verification languages have been conceived. They aim at a mathematically precise description of the properties and behavior of the systems under discussion, and in addition often allow correctness to be proven (verified) within the same system.

This course will give an introduction to algebraic specification, its history, its logical roots, and its relevance to current developments. Paired with the theoretical background, we give an introduction to CafeOBJ, a programming language that allows both specification and verification to be carried out together. The course should enable students to understand the theoretical background of algebraic specification, as well as to read and write basic specifications in CafeOBJ.

Description

CafeOBJ is a specification language based on three-way extensions to many-sorted equational logic: the underlying logic is order-sorted, not just many-sorted; it admits unidirectional transitions, as well as equations; it also accommodates hidden sorts, on top of ordinary, visible sorts. A subset of CafeOBJ is executable, where the operational semantics is given by a conditional order-sorted term rewriting system.

The language system CafeOBJ has been under constant development at the institute of the lecturers since the late 1980s. It is closely related to other algebraic specification languages in the OBJ family, including Maude. The CafeOBJ language and the range of verification methods and tools it supports – including inductive theorem proving, verification of behavioral specifications, deductive invariant proofs, and reachability analysis of concurrent systems – have played a key role in both extending algebraic specification techniques and bringing them into contact with many software engineering applications.

The following topics will be discussed:

  • algebraic foundations: many-sorted algebras, order-sorted algebras, behavioral specification
  • computational foundations: rewriting
  • programming with CafeOBJ: language elements, modules, simple programs
  • CloudSync: presentation of an example cloud synchronization protocol and its verification

To keep the lectures from becoming too 'heavy', we will structure each lecture into two parts: a first part providing an introduction to some theoretical concept or framework, and a second part dealing with actual programming and implementation. Especially for the second part of each lecture, students are encouraged to use their laptops to try out code and experiment.

26 August, 2016 09:35AM by Norbert Preining

hackergotchi for Michal Čihař

Michal Čihař

CI coverage from Windows, Linux and OSX

Once I got CI working on multiple platforms, the obvious next step was to aggregate coverage reports across them. This should not be that hard, right? Well, I've spent a couple of hours on it over the last few days.

On Linux and OSX it was pretty much straightforward. Both GCC and Clang support coverage, so it's just a matter of configuring them properly and collecting the coverage reports. I had used my own solution for that in the past, and it was really far from working well (somehow I never managed to get coverage fully uploaded to Codecov). Fortunately there is a CMake script called CMake-codecov which does all the needed work and works out of the box with GCC and Clang (even on OSX). Well, it works on Travis only once you update the compilers and install the llvm-cov tool.

The Windows part on AppVeyor was much harder for me. This can largely be attributed to my lack of experience with Windows, and especially with development on Windows, in the past ten years (probably even more). The first challenge was to find something that can generate code coverage there.

After a lot of googling I settled on OpenCppCoverage, which seems to be the only free solution I was able to find. The good thing is that it can generate coverage in the Cobertura format that Codecov understands. There are also bad things that I've learned. First of all, it's quite hard to integrate with CTest. There is no support for wrapping test calls in custom commands, so I've misused the memory checks for that purpose: I've written a small Python script which pretends to be the valgrind interface and calls OpenCppCoverage in the background.
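
The mechanism is roughly the following (a minimal shell sketch of the idea; the actual wrapper was a Python script, and the output file name here is made up). CTest invokes the configured memory-check command with valgrind-style options followed by the real test command, so the wrapper discards the options and hands the test over to OpenCppCoverage:

#!/bin/sh
# Hypothetical valgrind stand-in: CTest calls this as
#   wrapper <valgrind-style options> <test-executable> <args...>
# Drop everything that looks like an option, then run the real test
# under OpenCppCoverage, exporting one Cobertura file per test case.
while [ $# -gt 0 ] && [ "${1#-}" != "$1" ]; do
    shift
done
exec OpenCppCoverage --export_type "cobertura:coverage-$$.xml" -- "$@"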

Now I had around 800 coverage files (one for each test case) and needed to deal with them somehow. The Codecov command line client doesn't handle this out of the box, so the obvious choice was to merge them before upload. There even seems to be a script for doing that, but trying it on our coverage data got nowhere near completion within an hour, so that's not really a good choice. The second thing I tried was merging the binary coverage in OpenCppCoverage and then exporting to the Cobertura format. Obviously Gammu is a special project, as all I got from this attempt was a crashing OpenCppCoverage (it did merge some of the coverages, but it failed in the end without indicating any error).

In the end I settled on uploading the files in chunks to Codecov. This seems to work quite okay, though it is a bit slow, mostly due to the way the Codecov bash uploader prepares data for upload (but this will hopefully be fixed soon).
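
Such a chunked upload can be as simple as the following sketch (the file pattern, the chunk size of 40, and codecov.sh as the name of the saved bash uploader are all assumptions):

# Upload the Cobertura files a few dozen at a time rather than merging them
ls coverage-*.xml | xargs -n 40 | while read -r chunk; do
    bash codecov.sh $(printf -- ' -f %s' $chunk)
done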

Anyway, the goal has been reached: both Windows and Linux code now shows up in the coverage reports.

Filed under: Debian English Gammu | 0 comments

26 August, 2016 04:00AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 0.7.400.2.0


Another Armadillo 7.* release -- now at 7.400. We skipped releasing the 7.300.* series to CRAN as it came too soon after our most recent CRAN release. Releasing RcppArmadillo 0.7.400.2.0 now keeps us at the (roughly monthly) cadence, which works as a good compromise between getting updates out at Conrad's sometimes frantic pace and keeping CRAN (and Debian) uploads to about once per month.

So we may continue the pattern of helping Conrad with thorough regression tests by building against all (by now 253 (!!)) CRAN dependencies, while keeping releases at the GitHub repo and uploading to CRAN at most once a month.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to that of Matlab.

The new upstream release adds a few new helper functions. Detailed changes in this release relative to the previous CRAN release are as follows:

Changes in RcppArmadillo version 0.7.400.2.0 (2016-08-24)

  • Upgraded to Armadillo release 7.400.2 (Feral Winter Deluxe)

    • added expmat_sym(), logmat_sympd(), sqrtmat_sympd()

    • added .replace()

Changes in RcppArmadillo version 0.7.300.1.0 (2016-07-30)

  • Upgraded to Armadillo release 7.300.1

    • added index_min() and index_max() standalone functions

    • expanded .subvec() to accept size() arguments

    • more robust handling of non-square matrices by lu()

Courtesy of CRANberries, there is a diffstat report. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

26 August, 2016 01:03AM

hackergotchi for Matthew Garrett

Matthew Garrett

Priorities in security

I read this tweet a couple of weeks ago:

and it got me thinking. Security research is often derided as unnecessary stunt hacking, proving insecurity in things that are sufficiently niche or in ways that involve sufficient effort that the realistic probability of any individual being targeted is near zero. Fixing these issues is basically defending you against nation states (who (a) probably don't care, and (b) will probably just find some other way) and, uh, security researchers (who (a) probably don't care, and (b) see (a)).

Unfortunately, this may be insufficient. As basically anyone who's spent any time anywhere near the security industry will testify, many security researchers are not the nicest people. Some of them will end up as abusive partners, and they'll have both the ability and desire to keep track of their partners and ex-partners. As designers and implementers, we owe it to these people to make software as secure as we can rather than assuming that a certain level of adversary is unstoppable. "Can a state-level actor break this" may be something we can legitimately write off. "Can a security expert continue reading their ex-partner's email" shouldn't be.


26 August, 2016 12:02AM

August 25, 2016

hackergotchi for Alessio Treglia

Alessio Treglia

The Breath of Time

 

For centuries man hunted, brought animals to pasture, cultivated fields and sailed the seas without any kind of tool to measure time. Back then, time was not measured, but only estimated with vague approximation, and its pace was enough to dictate the steps of the day and the life of man. Subsequently, for many centuries, hourglasses accompanied civilization with the slow flow of their sand grains. About hourglasses, Ernst Jünger writes in “Das Sanduhrbuch” (1954, no English translation): “This small mountain, formed by all the lost moments that fell on each other, could be understood as a comforting sign that time disappears but does not fade. It grows in depth”.

For the philosophers of ancient Greece, time was just a way to measure how things move in everyday life, and in any case there was a clear distinction between “quantitative” time (Kronos) and “qualitative” time (Kairòs). According to Parmenides, time is guise, because its existence…

<Read More…[by Fabio Marzocca]>

25 August, 2016 04:33PM by Fabio Marzocca

Joerg Jaspert

New gnupg-agent in Debian

In case you just upgraded to the latest gnupg-agent and use gnupg-agent as your ssh-agent, you may find that ssh refuses to work with a simple but unhelpful

sign_and_send_pubkey: signing failed: agent refused operation

This seems to come from systemd starting the agent, rather than a script at the start of the X session, so the agent ends up with either no tty or an unusable one. A simple

gpg-connect-agent updatestartuptty /bye

updates that, and voilà, ssh agent functionality is back.

Note: This assumes you have “enable-ssh-support” in your ~/.gnupg/gpg-agent.conf
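
For reference, the relevant pieces could look like this (the gpgconf query for the socket path is an assumption that holds for recent GnuPG 2.1 versions; older setups export a hard-coded socket path instead):

# ~/.gnupg/gpg-agent.conf
enable-ssh-support

# in your shell startup, point ssh at the agent's socket:
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"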

25 August, 2016 09:55AM

hackergotchi for Norbert Preining

Norbert Preining

Gaming: Deus Ex Go

During long flights and lazy afternoons relaxing from teaching, I tried out another game on my Android device: Deus Ex Go. It is a turn-based game in the style of the Deus Ex series (long years ago I was a beta tester for the LGP version of Deus Ex). You have to pass through a series of levels, each one consisting of a hexagonal grid with an entry and an exit point, and some nasty villains or machines trying to kill you.


Without any explanation you are thrown into the game, and it takes a few iterations until you understand what kinds of attacks you are facing; but once you have figured that out, it is a more or less simple combinatorial puzzle of how to get to the exit. I played through the roughly 50 levels of the story mode, and I think it was only in the last five that I once or twice had to actually stop and think hard to find a solution.

I found the game quite amusing at the beginning, but it soon became repetitive. Since you can play through the whole story mode in probably one long afternoon, that is not much of a problem. More of a problem is the game's apparently incredible battery usage: playing for a while without paying attention soon leaves you with a nearly empty battery.

Graphically well done, with more or less interesting gameplay, it still does not stand up to Monument Valley.

25 August, 2016 09:34AM by Norbert Preining

hackergotchi for Francois Marier

Francois Marier

Debugging gnome-session problems on Ubuntu 14.04

After upgrading an Ubuntu 14.04 ("trusty") machine to the latest 16.04 Hardware Enablement packages, I ran into login problems. I could log into my user account and see the GNOME desktop for a split second before getting thrown back into the LightDM login manager.

The solution I found was to install this missing package:

apt install libwayland-egl1-mesa-lts-xenial

Looking for clues in the logs

The first place I looked was the log file for the login manager (/var/log/lightdm/lightdm.log) where I found the following:

DEBUG: Session pid=12743: Running command /usr/sbin/lightdm-session gnome-session --session=gnome
DEBUG: Creating shared data directory /var/lib/lightdm-data/username
DEBUG: Session pid=12743: Logging to .xsession-errors

This told me that the login manager runs the gnome-session command and gets it to create a session of type gnome. That command line is defined in /usr/share/xsessions/gnome.desktop (look for Exec=):

[Desktop Entry]
Name=GNOME
Comment=This session logs you into GNOME
Exec=gnome-session --session=gnome
TryExec=gnome-shell
X-LightDM-DesktopName=GNOME

I couldn't see anything unexpected there, but it did point to another log file (~/.xsession-errors) which contained the following:

Script for ibus started at run_im.
Script for auto started at run_im.
Script for default started at run_im.
init: Le processus gnome-session (GNOME) main (11946) s'est achevé avec l'état 1
init: Déconnecté du bus D-Bus notifié
init: Le processus logrotate main (11831) a été tué par le signal TERM
init: Le processus update-notifier-crash (/var/crash/_usr_bin_unattended-upgrade.0.crash) main (11908) a été tué par le signal TERM

Searching for French error messages isn't as useful as searching for English ones, so I took a look at /var/log/syslog and found this:

gnome-session[4134]: WARNING: App 'gnome-shell.desktop' exited with code 127
gnome-session[4134]: WARNING: App 'gnome-shell.desktop' exited with code 127
gnome-session[4134]: WARNING: App 'gnome-shell.desktop' respawning too quickly
gnome-session[4134]: CRITICAL: We failed, but the fail whale is dead. Sorry....

It looks like gnome-session is executing gnome-shell and that the latter is terminating prematurely. This would explain why gnome-session exits immediately after login.

Increasing the amount of logging

In order to get more verbose debugging information out of gnome-session, I created a new type of session (GNOME debug) by copying the regular GNOME session:

cp /usr/share/xsessions/gnome.desktop /usr/share/xsessions/gnome-debug.desktop

and then adding --debug to the command line inside gnome-debug.desktop:

[Desktop Entry]
Name=GNOME debug
Comment=This session logs you into GNOME debug
Exec=gnome-session --debug --session=gnome
TryExec=gnome-shell
X-LightDM-DesktopName=GNOME debug

After restarting LightDM (service lightdm restart), I clicked the GNOME logo next to the password field and chose GNOME debug before trying to login again.

This time, I had a lot more information in ~/.xsession-errors:

gnome-session[12878]: DEBUG(+): GsmAutostartApp: starting gnome-shell.desktop: command=/usr/bin/gnome-shell startup-id=10d41f1f5c81914ec61471971137183000000128780000
gnome-session[12878]: DEBUG(+): GsmAutostartApp: started pid:13121
...
/usr/bin/gnome-shell: error while loading shared libraries: libwayland-egl.so.1: cannot open shared object file: No such file or directory
gnome-session[12878]: DEBUG(+): GsmAutostartApp: (pid:13121) done (status:127)
gnome-session[12878]: WARNING: App 'gnome-shell.desktop' exited with code 127

which suggests that gnome-shell won't start because of a missing library.

Finding the missing library

To find the missing library, I used the apt-file command:

apt-file update
apt-file search libwayland-egl.so.1

and found that this file is provided by the following packages:

  • libhybris
  • libwayland-egl1-mesa
  • libwayland-egl1-mesa-dbg
  • libwayland-egl1-mesa-lts-utopic
  • libwayland-egl1-mesa-lts-vivid
  • libwayland-egl1-mesa-lts-wily
  • libwayland-egl1-mesa-lts-xenial

Since I installed the LTS Enablement stack, the package I needed to install to fix this was libwayland-egl1-mesa-lts-xenial.

I filed a bug for this on Launchpad.

25 August, 2016 05:00AM

August 24, 2016

hackergotchi for Don Armstrong

Don Armstrong

H3ABioNet Hackathon (Workflows)

I'm in Pretoria, South Africa at the H3ABioNet hackathon which is developing workflows for Illumina chip genotyping, imputation, 16S rRNA sequencing, and population structure/association testing. Currently, I'm working with the imputation stream and we're using Nextflow to deploy an IMPUTE-based imputation workflow with Docker and NCSA's openstack-based cloud (Nebula) underneath.

The OpenStack command line clients (nova and cinder) seem to be pretty usable for automating bringing up a fleet of VMs, and the cloud-init package present in the images makes configuring them pretty simple.
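
As an illustration, bringing up the fleet can boil down to a loop like this sketch (the image, flavor, key, user-data file and VM names are all placeholders):

# Boot ten worker VMs, each configured at first boot by cloud-init user data
for i in $(seq 1 10); do
    nova boot --image debian-jessie --flavor m1.large \
        --key-name hackathon-key --user-data worker-init.yaml "worker-$i"
done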

Now if I only knew of a better shared object store supported by Nextflow on OpenStack besides mounting an NFS share, things would be better.

You can follow our progress in our git repo: [https://github.com/h3abionet/chipimputation]

24 August, 2016 02:40PM

Zlatan Todorić

Take that boredom

While I was bored at Defcon, I took the smallest VPS in the DO offering (512MB RAM, 20GB disk), configured nginx on it, bought the domain zlatan.tech and cp'ed my blog data to blog.zlatan.tech. I thought it would just be a boredom project that I would tear apart in a day or two, but it is still there.

Not only that: the droplet came with Debian 8.5, but I just added unstable and experimental to it and upgraded, to experiment and see how long I would need to break it. To make it even more adventurous (and also force myself not to take it too seriously, at least at this point) I did something that would make Lars scream - I did not enable backups!

While having fun with it I added a Let's Encrypt certificate to it (wow, that was quite easy).

Then I installed and configured Tor, and ended up adding an .onion domain for it! It is: pvgbzphm622hv4bo.onion

My main blog is still going to be zgrimshell.github.io (for now at least), where I push my content generated with Nikola (a static site generator written in Python) as git commits. To my other two domains (on my server) I just rsync the content now. Simple and efficient.
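
The whole deploy step is essentially something like this sketch (the remote path is made up; output/ is Nikola's default build directory):

# Rebuild the static site and mirror it to the server's web root
nikola build
rsync -avz --delete output/ zlatan@blog.zlatan.tech:/var/www/blog/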

I must admit I like my blog layout. It is simple, easy to read, efficient and fast; I don't bother with comments, and writing a blog post in markdown (inside a terminal, like every well-behaved hacker citizen) and compiling it with Nikola is a breeze (and yes, I did choose Nikola because of Nikola Tesla and Python). I must also admit that nginx is a pretty nice webserver; there is no need to explain the beauty of git, and I can't recommend rsync enough.

If anyone is interested in doing the same, I am happy to talk about it, but these tools are really simple (I enjoy simple things, and by simple I mean small tools, no complicated configs and easy execution).

24 August, 2016 05:45AM by Zlatan Todoric

August 23, 2016

Reproducible builds folks

Reproducible Builds: week 69 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday August 14 and Saturday August 20 2016:

Fasten your seatbelts

Important note: we have now enabled build path variation for unstable, so your package(s) might become unreproducible even though they were previously reported as reproducible… given a specific build path they probably still are reproducible, but read on for the details in the tests.reproducible-builds.org section below! As said many times: this is still research and we are working to make it reality.

Media coverage

Daniel Stender blogged about python packaging and explained some caveats regarding reproducible builds.

Toolchain developments

Thomas Schmitt uploaded xorriso which now obeys SOURCE_DATE_EPOCH. As stated in its man pages:

ENVIRONMENT
[...]
SOURCE_DATE_EPOCH  belongs to the specs of reproducible-builds.org.  It
is supposed to be either undefined or to contain a decimal number which
tells the seconds since january 1st 1970. If it contains a number, then
it is used as time value to set the  default  of  --modification-date=,
--gpt_disk_guid,  and  --set_all_file_dates.  Startup files and program
options can override the effect of SOURCE_DATE_EPOCH.
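
In practice this means an ISO build can be pinned to a fixed date along these lines (a sketch; where the epoch comes from and which xorriso arguments are used will vary per project):

# Derive the timestamp from the last commit so rebuilds agree on all dates
export SOURCE_DATE_EPOCH="$(git log -1 --pretty=%ct)"
xorriso -as mkisofs -o image.iso tree/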

Packages reviewed and fixed, and bugs filed

The following packages have become reproducible after being fixed:

The following updated packages appear to be reproducible now, for reasons we were not able to figure out. (Relevant changelogs did not mention reproducible builds.)

  • vulkan/1.0.21.0+dfsg1-1 by Timo Aaltonen.

The following 2 packages were not changed, but have become reproducible due to changes in their build-dependencies: tagsoup and tclx8.4.

Some uploads have addressed some reproducibility issues, but not all of them:

Patches submitted that have not made their way to the archive yet:

Bug tracker house keeping:

  • Chris Lamb pinged 164 bugs he filed more than 90 days ago which have a patch and had no maintainer reaction.

Reviews of unreproducible packages

55 package reviews have been added, 161 have been updated and 136 have been removed this week, adding to our knowledge about identified issues.

2 issue types have been updated:

Weekly QA work

FTBFS bugs have been reported by:

  • Chris Lamb (16)
  • Santiago Vila (2)

diffoscope development

Chris Lamb, Holger Levsen and Mattia Rizzolo worked on diffoscope this week.

Improvements were made to SquashFS and JSON comparison, the https://try.diffoscope.org/ web service, documentation, packaging, and general code quality.

diffoscope 57, 58, and 59 were uploaded to unstable by Chris Lamb. Versions 57 and 58 were both broken, so Holger set up a job on jenkins.debian.net to test diffoscope on each git commit. He also wrote a CONTRIBUTING document to help prevent this from happening in future.

From these efforts, we were also able to learn that diffoscope is now reproducible even when built across multiple architectures:

< h01ger> | https://tests.reproducible-builds.org/debian/rb-pkg/unstable/amd64/diffoscope.html shows these packages were built on amd64:
< h01ger> |  bd21db708fe91c01ba1c9cb35b9d41a7c9b0db2b 62288 diffoscope_59_all.deb
< h01ger> |  366200bf2841136a4c8f8c30bdc87057d59a4cdd 20146 trydiffoscope_59_all.deb
< h01ger> | and on i386:
< h01ger> |  bd21db708fe91c01ba1c9cb35b9d41a7c9b0db2b 62288 diffoscope_59_all.deb
< h01ger> |  366200bf2841136a4c8f8c30bdc87057d59a4cdd 20146 trydiffoscope_59_all.deb
< h01ger> | and on armhf:
< h01ger> |  bd21db708fe91c01ba1c9cb35b9d41a7c9b0db2b 62288 diffoscope_59_all.deb
< h01ger> |  366200bf2841136a4c8f8c30bdc87057d59a4cdd 20146 trydiffoscope_59_all.deb

And those also match the binaries uploaded by Chris in his diffoscope 59 binary upload to ftp.debian.org, yay! Eating our own dogfood and enjoying it!

tests.reproducible-builds.org

Debian related:

  • show percentage of results in the last 24/48h (h01ger)
  • switch python database backend to SQLAlchemy (Valerie)
  • enable build path variation for unstable and experimental on all architectures (h01ger)

The last change will probably have a visible impact: your package might become unreproducible in unstable, and this will be shown on tracker.debian.org, while it will still be reproducible in testing.

We've done this because we think reproducible builds are possible with arbitrary build paths. But we don't think those are a realistic goal for stretch, where we still recommend using .buildinfo to record the build path and then doing rebuilds using that path.

We are doing this because, besides doing theoretical groundwork, we also have a practical goal: to enable users to independently verify builds. And if they can only do this with a fixed path, so be it. For now :)

To be clear: for Stretch we recommend that reproducible builds are done in the same build path as the "original" build.

Finally, and just for future reference, when we enabled build path variation on Saturday, August 20th 2016, the numbers for unstable were:

suite all reproducible unreproducible ftbfs depwait not for this arch blacklisted
unstable/amd64 24693 21794 (88.2%) 1753 (7.1%) 972 (3.9%) 65 (0.2%) 95 (0.3%) 10 (0.0%)
unstable/i386 24693 21182 (85.7%) 2349 (9.5%) 972 (3.9%) 76 (0.3%) 103 (0.4%) 10 (0.0%)
unstable/armhf 24693 20889 (84.6%) 2050 (8.3%) 1126 (4.5%) 199 (0.8%) 296 (1.1%) 129 (0.5%)

Misc.

Ximin Luo updated our git setup scripts to make it easier for people to write proper descriptions for our repositories.

This week's edition was written by Ximin Luo and Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC.

23 August, 2016 12:56PM

August 22, 2016

hackergotchi for Sylvain Le Gall

Sylvain Le Gall

Release of OASIS 0.4.7

I am happy to announce the release of OASIS v0.4.7.


OASIS is a tool to help OCaml developers to integrate configure, build and install systems in their projects. It should help to create standard entry points in the source code build system, allowing external tools to analyse projects easily.

This tool is freely inspired by Cabal, which is the same kind of tool for Haskell.

You can find the new release here and the changelog here. More information about OASIS in general on the OASIS website.

Pull request for inclusion in OPAM is pending.

Here is a quick summary of the important changes:

  • Drop support for OASISFormat 0.2 and 0.1.
  • New plugin "omake" to support build, doc and install actions.
  • Improve automatic tests (Travis CI and AppVeyor)
  • Trim down the dependencies (removed ocaml-gettext, camlp4, ocaml-data-notation)

Features:

  • findlib_directory (beta): to install libraries in sub-directories of findlib.
  • findlib_extra_files (beta): to install extra files with ocamlfind.
  • source_patterns (alpha): to provide module to source file mapping.

This version contains a lot of changes and is the achievement of a huge amount of work. The addition of OMake as a plugin is a big step forward. The overall work has been targeted at making OASIS more library-like. This is still a work in progress, but we made some clear improvements by getting rid of various side effects (like the requirement to use "chdir" to handle the "-C" option, which led to propagating ~ctxt everywhere and to designing OASISFileSystem).

I would like to thank again the contributors to this release: Spiros Eliopoulos, Paul Snively, Jeremie Dimino, Christopher Zimmermann, Christophe Troestler, Max Mouratov, Jacques-Pascal Deplaix, Geoff Shannon, Simon Cruanes, Vladimir Brankov, Gabriel Radanne, Evgenii Lepikhin, Petter Urkedal, Gerd Stolpmann and Anton Bachin.

22 August, 2016 11:51PM by gildor

Luciano Prestes Cavalcanti

AppRecommender - Last GSoC Report

My work on Google Summer of Code was to create a new strategy for AppRecommender, one that takes a referenced package, or a list of referenced packages, analyzes the packages that the user has already installed, and makes a recommendation using the referenced packages as a base. For example: if the user runs "$ sudo apt install vim", AppRecommender uses "vim" as the referenced package and should recommend packages related both to "vim" and to the other packages that the user has installed. This work is done and has been added to the official AppRecommender repository.
 
During the GSoC program, more contributions were made to the AppRecommender project, helping to improve the recommendations, the installation, and the configuration of the Debian package.
 
The following link contains my commits on AppRecommender:
 
During the period intended for students to get to know the community of the project, I talked with the Debian community about my project to get feedback and ideas. When talking to the Debian community on the IRC channels, we came up with the idea of using the popularity-contest data to improve the recommendations. I talked with my mentors, who approved the idea, and then we increased the project scope to use the popularity-contest data to improve the AppRecommender recommendations.
 
The popularity-contest data has several privacy policy implications, so we did some research and published, on Planet Debian, a post that explains why we need the popularity-contest data to improve the recommendations and how we use this data. This post also contains an explanation of the risks and the measures taken to minimize them.
 
Then two activities were added. One of them was to create a script to be added to popularity-contest. This script gets the popularity-contest data, which is the users' packages, and generates clusters that group these packages by analyzing similar users. The other activity was to add collaborative data support to AppRecommender, which will download the clusters data and use it to improve the recommendations.
 
The popularity-contest cluster script is done and was reviewed by my mentor, but it has not been integrated into popularity-contest yet. The usage of clusters data in AppRecommender has already been implemented, but it is still not in the official repository because it is awaiting the cluster script's acceptance into popularity-contest. This work is not complete, but I will continue working with AppRecommender and the Debian community, and with my mentors' help I will finish it.
 
The following link contains my commits to the repository with the popularity-contest cluster script feature, as well as other scripts that I used to support my work; the only script that will be sent to popularity-contest is create_popcon_clusters.py:
 
The following link contains my commits on repository with the AppRecommender collaborative data feature: 
 
Google Drive folder with the patch:

22 August, 2016 04:54PM

hackergotchi for Lars Wirzenius

Lars Wirzenius

Linux 25 jubilee symposium

I gave a talk about the early days of Linux at the jubilee symposium arranged by the University of Helsinki CS department. Below is an outline of what I meant to speak about, but the actual talk didn't follow it exactly. You can compare these to the video once it comes online.

  • Linus and I met at uni, the only 2 Swedish speaking new students that year, so we naturally migrated towards each other.
  • After a year away for military service, got back in touch, summer of 1990.
  • C & Unix course fall of 1990; Minix.
  • Linus didn't think atime updates in real time were plausible, but I showed him; funnily enough, atime updates have been an issue in Linux until fairly recently, since they slow things down (without being particularly useful)
  • Jan 5, 1991 bought his first PC (i386 + i387 + 4 MiB RAM and a small hard disk); he had a Sinclair QL before that.
  • Played Prince of Persia for a couple of months.
  • Then wanted to learn i386 assembly and multitasking.
  • A/B threading demo.
  • Terminal emulation, Usenet access from home.
  • Hard disk driver, mistaking hard disk for a modem.
  • More ambition, announced Linux to the world for the first time
  • first ever Linux installation.
  • Upload to ftp.funet.fi, directory name by Ari Lemmke.
  • Originally not free software, licence changed early 1992.
  • First mailing list was created and introduced me to a flood of email (managed with VAX/VMS MAIL and later mush on Unix).
  • I talked a lot with Linus about design at this time, but never really participated in the kernel work (partly because disagreeing with Linus is a high-stress thing).
  • However, I did write the first sprintf for the kernel, since Linus hadn't learnt about varargs functions in C; he then ruined it and added the comment "Wirzenius wrote this portably..." (add google hit count for wirzenius+fucked).
  • During 1992 Linux grew fast, and distros happened, and a lot of packaging and porting of software; porting was easier because Linus was happy to add/change things in the kernel to accommodate software
  • A lot of new users during 1992 as well.
  • End of 1992 I and a few others founded the Linux Documentation Project to help all the new users, some of whom didn't come from a Unix background.
  • In fact, things progressed so fast in 1992 that Linus thought he'd release 1.0 very soon, resulting in a silly sequence of version numbers: 0.12, 0.95, 0.96, 0.96b, 0.96c, 0.96c++2.
  • X server ported to Linux; almost immediate prediction of the year of the Linux desktop never happening unless ALL the graphics cards were supported immediately.
  • Linus was of the opinion that you needed one process (not thread) per window in X; I taught him event driven programming.
  • Bug in network code, resulting in ban on uni network.
  • Pranks in the shared office room.
  • We released 1.0 in an event at the CS dept in March, 1994; this included some talks and a ritual compilation of the release version during the event.

22 August, 2016 03:03PM

Satyam Zode

Google Summer of Code 2016 : Final Report

Project Title : Improving diffoscope tool and reproducibility of Debian packages

Project details

This project aims to improve the diffoscope tool and fix Debian packages which are unreproducible in the Reproducible Builds testing framework. diffoscope recursively unpacks archives of many kinds and transforms various binary formats into more human-readable forms in order to compare them. As part of this project I worked on the argument completion feature and the feature for ignoring .buildinfo files. This project is part of the Reproducible Builds effort.

Mentor and Co-Mentor

  • Jérémy Bobbio (Lunar) : Mentor
  • Reiner Herrmann (deki) : Co-Mentor
  • Holger Levsen (h01ger) : Co-Mentor
  • Mattia Rizzolo (mapreri) : Co-Mentor

Project Discussion

  • Introduction to Reproducible Builds in Debian

    The first time I came to know about Reproducible Builds was during DebConf 2015. I started to get involved at the start of March 2016. At the beginning Lunar suggested I watch the talks listed on the Reproducible Builds wiki. I read the documentation on the Reproducible Builds site and started to participate in IRC discussions on #debian-reproducible.

  • Application Review Period

    During the proposal discussion period we discussed the areas where work needed to be done. I wrote the proposal and got it reviewed by the community on the mailing list. Simultaneously, I worked on bug #818111 and submitted a patch for it. That not only helped me to understand the concept of Reproducible Builds but also helped me to set up the testing environment required to check the reproducibility of Debian packages.

  • Community Bonding Period

    During the community bonding period I studied the codebase of diffoscope and also spent a good amount of time learning Python 3 metaprogramming and other OOP concepts. We also discussed more about hiding differences and the options for doing so. I couldn't finish my project research work during this period since I had exams in May 2016, which consumed almost half of the community bonding period and week 1 of the coding period.

Project Implementation

Challenges and Work Left

  • To understand the main purpose of diffoscope in the context of Reproducible Builds, I had to go through the complete Reproducible Builds project. It took a significant amount of time to understand what Reproducible Builds is and why it is important for Free Software to build reproducibly. diffoscope is the last tool in the Reproducible Builds toolchain, and it was a big challenge for me to understand the whole process and the objective of diffoscope.
  • Work Left:

Future work

  • Based on the research and implementation done during the coding period, make diffoscope better and enhance its ignoring capabilities.
  • Improve the parallel processing feature of diffoscope. This particular problem is hard to understand and implement.
  • Make diffoscope better by solving existing bugs.

Acknowledgement

I would like to express my deepest gratitude to Lunar for mentoring me throughout the Google Summer of Code program and for being cool. Lunar's deep knowledge of diffoscope and Python skills helped me a lot throughout the project, and we literally had great discussions. I would also like to thank the Debian community and Google for giving me this opportunity. Special thanks to the Reproducible Builds folks for all the guidance!

22 August, 2016 02:02PM

hackergotchi for DebConf team

DebConf team

Proposing speakers for DebConf17 (Posted by DebConf17 team)

As you may already know, next DebConf will be held at Collège de Maisonneuve in Montreal from August 6 to August 12, 2017. We are already thinking about the conference schedule, and the content team is open to suggestions for invited speakers.

Priority will be given to speakers who are not regular DebConf attendees, as they are more likely to bring diverse viewpoints to the conference.

Please keep in mind that some speakers may have very busy schedules and need to be booked far in advance. So, we would like to start inviting speakers in the middle of September 2016.

If you would like to suggest a speaker to invite, please follow the procedure described on the Inviting Speakers page of the DebConf wiki.


DebConf17 team

22 August, 2016 01:44PM by DebConf Organizers

Vincent Sanders

Down the rabbit hole

My descent began with a user reporting a bug and I fear I am still on my way down.

Like Alice I headed down the hole.
The bug was simple enough: a Windows bitmap file caused NetSurf to crash. Pretty quickly this was tracked down to the libnsbmp library attempting to decode the file. As to why we have a heavily used library for bitmaps? I am afraid they are part of every icon file, and many websites still have favicons in that format.

Some time with a hex editor and the file format specification soon showed that the image in question was malformed and had a bad offset header entry. So I was faced with two issues: firstly, the decoder crashed when presented with badly encoded data, and secondly, it failed to deal with incorrect header data.

This is typical of bug reports from real users: the obvious issues have already been encountered by the developers and unit tests formed to prevent them; what remains is harder to produce. After a debugging session with Valgrind and Electric Fence I discovered the crash was actually caused by running off the front of an allocated block due to an incorrect bounds check. Fixing the bounds check was simple enough, as was working around the bad header value, and after adding a unit test for the issue I almost moved on.

Almost...

American fuzzy lops are almost as cute as cats.
We already used the bitmap test suite of images to check the library's decoding, which was giving us a good 75% or so line coverage (I long ago added coverage testing to our CI system), but I wondered if there was a test set that might increase the coverage and perhaps exercise some more of the bounds-checking code. A bit of searching turned up the american fuzzy lop (AFL) project's synthetic corpora of bmp and ico images.

After checking with the AFL authors that the images were usable in our project, I added them to our test corpus and discovered a whole heap of trouble. After fixing more bounds checks and signedness issues I finally had a library I was pretty sure was solid, with over 85% test coverage.

Then I had the idea of actually running AFL on the library. I had been avoiding this because my previous experimentation with other fuzzing utilities had been utter frustration and a very poor return on the investment of time. The quick start guide looked straightforward enough, though, so I thought I would spend a short amount of time on it and maybe learn a useful tool.

I downloaded the AFL source and built it with a simple make which was an encouraging start. The library was compiled in debug mode with AFL instrumentation simply by changing the compiler and linker environment variables.

$ LD=afl-gcc CC=afl-gcc AFL_HARDEN=1 make VARIANT=debug test
afl-cc 2.32b by <lcamtuf@google.com>
afl-cc 2.32b by <lcamtuf@google.com>
COMPILE: src/libnsbmp.c
afl-cc 2.32b by <lcamtuf@google.com>
afl-as 2.32b by <lcamtuf@google.com>
[+] Instrumented 751 locations (64-bit, hardened mode, ratio 100%).
AR: build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/libnsbmp.a
COMPILE: test/decode_bmp.c
afl-cc 2.32b by <lcamtuf@google.com>
afl-as 2.32b by <lcamtuf@google.com>
[+] Instrumented 52 locations (64-bit, hardened mode, ratio 100%).
LINK: build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_bmp
afl-cc 2.32b by <lcamtuf@google.com>
COMPILE: test/decode_ico.c
afl-cc 2.32b by <lcamtuf@google.com>
afl-as 2.32b by <lcamtuf@google.com>
[+] Instrumented 65 locations (64-bit, hardened mode, ratio 100%).
LINK: build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_ico
afl-cc 2.32b by <lcamtuf@google.com>
Test bitmap decode
Tests:606 Pass:606 Error:0
Test icon decode
Tests:392 Pass:392 Error:0
TEST: Testing complete

I stuffed the AFL build directory on the end of my PATH, created a directory for the output and ran afl-fuzz

afl-fuzz -i test/bmp -o findings_dir -- ./build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_bmp @@ /dev/null

The result was immediate and not a little worrying, within seconds there were crashes and lots of them! Over the next couple of hours I watched as the unique crash total climbed into the triple digits.

I was forced to abort the run at this point as, despite clear warnings in the AFL documentation of the demands of the tool, my laptop was clearly not cut out to do this kind of work and had become distressingly hot.

AFL has a visualisation tool so you can see what kind of progress it is making. It produced a graph that showed just how fast it managed to produce crashes and how much the return plateaus after just a few cycles, although it was still finding a new unique crash every ten minutes or so when I aborted the run.

I dove in to analyse the crashes and it immediately became obvious the main issue was caused when the test tool attempted allocations of absurdly large bitmaps. The browser itself uses a heuristic to determine the maximum image size based on used memory and several other values. I simply applied an upper bound of 48 megabytes per decoded image which fits easily within the fuzzers default heap limit of 50 megabytes.

The main source of "hangs" also came from large allocations, so once the test was fixed, afl-fuzz was re-run with a timeout parameter set to 100ms. This time, after several minutes, no crashes and only a single hang were found, which came as a great relief; at that point my laptop had a hard shutdown due to a thermal event!

Once the laptop cooled down, I spooled up a more appropriate system to perform this kind of work: a 24-way 2.1GHz Xeon system. A Debian Jessie guest VM with 20 processors and 20 gigabytes of memory was created, and the build was replicated and instrumented.

AFL master node display
To fully utilise this system the next test run would utilise AFL in parallel mode. In this mode there is a single "master" running all the deterministic checks and many "secondary" instances performing random tweaks.

If I have one tiny annoyance with AFL, it is that breeding and feeding a herd of rabbits by hand is tedious, and something I would like to see a convenience utility for.
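
Feeding the warren currently looks something like this (a sketch reusing the earlier single-instance command; the instance names match this run, and in practice each instance gets its own screen or tmux window):

# -M runs the single deterministic master, -S adds random-tweaking
# secondaries; all instances cooperate through the shared sync_dir
afl-fuzz -i test/bmp -o sync_dir -M fuzzer01 -- ./build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_bmp @@ /dev/null
afl-fuzz -i test/bmp -o sync_dir -S fuzzer02 -- ./build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_bmp @@ /dev/null
# ...and so on, up to -S fuzzer19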

The warren was left overnight with 19 instances, and by morning it had generated crashes again. This time, though, the crashes actually appeared to be real failures.

$ afl-whatsup sync_dir/
Summary stats
=============

Fuzzers alive : 19
Total run time : 5 days, 12 hours
Total execs : 214 million
Cumulative speed : 8317 execs/sec
Pending paths : 0 faves, 542 total
Pending per fuzzer : 0 faves, 28 total (on average)
Crashes found : 554 locally unique

All the crashing test cases are available, and a simple file command immediately showed that all the crashing test files had one thing in common: the height of the image was -2147483648. This seemingly odd number is actually meaningful to a programmer; it is the largest negative number which can be stored in a 32-bit integer (INT32_MIN). I immediately examined the source code that processes the height in the image header.

if ((width <= 0) || (height == 0))
        return BMP_DATA_ERROR;
if (height < 0) {
        bmp->reversed = true;
        height = -height;
}

The bug is in the negation that makes the height positive: for INT32_MIN this results in height being set to 0 after the existing check for zero, and in a crash later in execution. A simple fix was applied and a test case added, removing the crash and any possible future failure due to this.

Another AFL run has been started, and after a few hours it has yet to find a crash or a non-false-positive hang, so it looks like any remaining crashes will be much harder to uncover.

Main lessons learned are:
  • AFL is an easy to use and immensely powerful and effective tool. State of the art has taken a massive step forward.
  • The test harness is part of the test! Make sure it does not behave poorly and cause issues itself.
  • Even a library with extensive test coverage and real world users can benefit from this technique. But it remains to be seen how quickly the rate of return will reduce after the initial fixes.
  • Use the right tool for the job! Ensure you heed the warnings in the manual, as AFL uses a lot of resources, including CPU, disc and memory.
I will of course be debugging any new crashes that occur, and perhaps turning my sights to the project's other unit-tested libraries. I will also be investigating the generation of our own custom test corpus with AFL to replace the demo set; this will hopefully increase our unit test coverage even further.

Overall this has been my first successful use of a fuzzing tool and a very positive experience. I would wholeheartedly recommend using AFL to find errors and perhaps even integrate as part of a CI system.

22 August, 2016 12:24PM by Vincent Sanders (noreply@blogger.com)

hackergotchi for Michal Čihař

Michal Čihař

Continuous integration on multiple platforms

Over the weekend I've played with continuous integration for Gammu to make it run on more platforms. I had to remember many things from the Windows world along the way, and the solution is not yet complete, but the basic build is working; the only problematic part is the external dependencies.

First of all we already have Linux builds on Travis CI. These cover compilation with both GCC and Clang compilers, hopefully covering most of the possible problems.

Recently I've added OS X builds on Travis CI, which was pretty much painless and worked out of the box.

The next major platform to support is Windows. Once I discovered AppVeyor, I thought it might be the way to go. They have free plans for open-source projects (though with only one parallel build, compared to the four provided by Travis CI).

As our build system is cross-platform and based on CMake, it should work pretty much out of the box, right? Well, almost: tweaking the basics took some time (unfortunately there is no CMake support on AppVeyor, so you have to script it a bit).

The most painful things on the way:

  • finding the correct way to invoke the build and testsuite
  • our code was broken on Windows, making the testsuite fail
  • how to work with PowerShell (no, I'm not going to like it)
  • how to download and install an executable into PATH
  • test output integration with AppVeyor - done using an XSLT transformation and uploading the test results manually (see the sketch after this list)
  • the 32-bit / 64-bit mess: CMake happily finds 32-bit libs during a 64-bit build and vice versa, which makes the build fail later when linking - fixed by testing whether code can actually be built with the given library
  • 64-bit code crashes in the dummy driver, causing testsuite failures (this has to be something Windows-specific, as the code works fine on 64-bit Linux) - this seems to be caused by too-big allocations on the stack; moving them to the heap will fix it
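
The test result upload mentioned above boils down to something like this (ctest-to-junit.xsl is a hypothetical name for our stylesheet; the endpoint is AppVeyor's job-scoped test-results API, and APPVEYOR_JOB_ID is set by AppVeyor itself):

# Convert CTest's XML output into JUnit form, then hand it to AppVeyor
xsltproc ctest-to-junit.xsl Testing/*/Test.xml > results.xml
curl -F "file=@results.xml" \
    "https://ci.appveyor.com/api/testresults/junit/$APPVEYOR_JOB_ID"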

You can check our current appveyor.yml in case you're going to try something similar. Current build results are on AppVeyor.

As a nice side effect, we now have up-to-date Windows binaries for Gammu.

Filed under: Debian English Gammu | 0 comments

22 August, 2016 10:00AM

NOKUBI Takatsugu

The 9th typhoon looks like the Debian swirl logo

According to my follower’s tweet:

The typhoon image and a horizontally flipped Debian logo look the same.

22 August, 2016 09:01AM by knok

Zlatan Todorić

When you wake up with a feeling

I woke up at 5am. Somehow I made myself go back to sleep again. Woke up at 6am. Such is the life of jet lag. Or I am just getting too old for it.

But the truth wouldn't be complete with only those assertions. I woke up inspired and tired at the same time. Tired because I am doing very time-consuming things. Also, at the same time, very emotional things. AND at the exact same time, things that inspire me.

On paper, I am the technical leader of Purism. In reality, I have built insanely good relations with my CEO in a short time. So good that for months I was not only leading the technical shift, but I also took over operations (getting orders and delivering them while working with our assembly line to automate most of the tasks in this field). I was also playing first line of technical support (forums, IRC and email); actually, I was pretty much the only line of support for a few months. I was doing some website changes: changing some wording, updating a bunch of plugins and making sure it all works, resolving (hopefully) Tor and Cloudflare issues for it, taming the annoying caching system for the forums, stopping forum spam and so on. I worked on better messaging for Purism public relations. I taught my team to use keys for signing and encryption. I interviewed (and read all the mails from) people who were interested in working for or helping Purism. In the process of doing all that, I maybe wasn't the speediest person in responding to all our users' needs, but I hope they understand and forgive me.

I was doing all that while researching and developing tablets (which ended up not being the most successful campaign, but we now do have them as a product). I was doing all that while seeing (and resolving) our kernel builds failing. While working on pushing touchpad patches upstream (not so good yet, but we are still working on it; they did end up upstreamed). While seeing repos being down because of our host. Repos being down because of broken sync with Debian. Repos being down because of our key mis-management. Metadata not working well. PureBrowser getting broken all the time. The Tor browser out of date. No real ISO updates. Wrong sources.list entries and so on.

And the hardest part was that I was doing all this with a very limited scope and even more limited resources. So what kept me going, what is pushing me forward, and what am I doing?

One philosophy - Free Software. Let me not explain it as technical debt; let me explain it as a social movement. In an age where people are "bombed" by the media, by ever-lying politicians (who use the fear of non-existent threats/terror as a model to control the population), in an age where proprietary corporations sell your freedom so you can gain temporary convenience, the term Free Software is like Giordano Bruno in the age of the Inquisition. Free Software does not only preserve your freedom with respect to the software's source; it preserves your freedom to think, to think outside the box, and not to be punished for that. It preserves the freedom to live - to choose what to do and when, without a negative impact on your life or other people's lives. The freedom to be transparent and to share. Because not only do ideas grow with sharing; we, as human beings, grow as we share. The freedom to say "NO".

NO. I somehow learnt, and personally think, that the freedom to say NO is the most important freedom in our lives. No, I will not obey some artificially created master who thinks they can plan and choose my life decisions. No, I will not negotiate my freedom for your convenience (such freedom is not real anyway, and it is only a matter of time before you are blown away by such an illusion). No, I will not accept your credit, because it has STRINGS attached to it which you either don't present or you blur in a mountain of superficial wording. No, I will not implant a chip inside me for the sake of your research or my convenience. No, I will not have a social media account where the majority of people are. No, I will not have a pacemaker which is a black box with proprietary (buggy) software that harvests my data without me being able to look at it.

Yin-Yang. Yes, I want to collaborate on making the world a better place for us all. I don't agree with most people, but that doesn't make them my enemies (although the media would like us to feel and think like that). I will try to preserve everyone's freedom as much as I can. Yes, I will share with my community and friends. Yes, I want to learn from those better than I am. Yes, I want to have awesome mentors. Yes, I will try to be an awesome mentor. Yes, I choose to care, and not to ignore facts and actions done by me and other people. Yes, I have the right to be imperfect and to make mistakes, as long as I acknowledge them and work on them. Bugfixing ourselves as humans is the most important task in our lives. As in software, it is very time-consuming, but also as in software, it is an improvement and an incredible satisfaction to see a better version of yourself, getting more and more features (even if that sometimes means actually getting rid of other, bad features).

This all blends into my work at Purism. I spend a lot of time thinking about projects, development and the future. I must do that in order not to make grave mistakes. Failing hardware and software is not a grave mistake; serious, but not grave. Grave is if we betray ourselves and our community in the pursuit of freedom. We are trying to unify many things - we want to give you security, privacy and FREEDOM with convenience. So I am pushing myself out of comfort zones, and also out of conventional, and sometimes even my own standard, ways of thinking. I have seen that the non-existing infrastructure for PureOS is hurting us a lot, but I needed to cope with it until the moment I could say: not anymore, we are starting to build our own infrastructure. I was coping with Cloudflare being assholes to Tor users, but now we are also shifting away from them. I came to a team where people didn't properly understand what we are building and why - a very small and not that efficient team.

Now we have employed a dedicated and hard-working person for operations (Goran), whom I trust. We have a dedicated support person (Mladen) who tries hard to work with people. A very creative visual mastermind (Francois). We have a capable Debian Developer (Matthias Klumpp) working on the new PureOS infrastructure. We have capable and dedicated sysadmins (Theo and Stelio), which we didn't even have in the past. We are trying to LEVEL UP Free Software and unify it into a convenient solution, an effort led by Joey Hess. We have a hard-working PureOS developer (Hema) who is coping with the current non-existent PureOS infra. We have a GNOME Board of Directors member (Jeff) who is trying to polish our image in the world (working with James to try to bring some light into the shadows caused by infinite supply chain delays). We have created an Advisory Board for Freedom, Privacy and Security, whose members I don't want to name yet as we are preparing to announce it soon (and trust me, we have good people here).

But the most important thing here is not that they are all capable or cool people. It is the core value in all of them - they care about freedom, and I trust them on their paths. Trust is always important, but at Purism it is essential for our work. I built the workflow without time management (everyone spends their time every single day as they see fit, as long as the work gets done). And we don't create insanely short deadlines just because everyone else thinks they are important (rarely is something more important than our time freedom). So the trust is built out of knowledge, and the knowledge I have about these people and their work exists because we freely share, with no strings attached.

Because of them, and other good people from our community, I have the energy to sacrifice my entire time for Purism. It is not black and white: the CEO and I don't always agree, some members of my team don't always agree with me or I with them, and some people in the community are very rude, impolite and don't respect our work. But even with disagreement, everyone at Purism finds agreement in the end (we use facts in our judgments), and all the people who just try to disturb my team's work and mine don't weigh as much as all the lovely words of people who believe in us, who send us words of support and who share ideas and their thoughts with us. There is no greater satisfaction for me than reading a personal mail giving us kudos for the work and showing an understanding of the underlying amount of work and issues.

While we are limited in resources, we have had occasional outcries from the community offering to help us. Now I want to help them to help me (you see the Freedom of sharing here?). PureOS now has a wiki. It will be a community wiki which is endorsed by Purism as a company. Yes, you read it right: Purism considers its community part of the company (you don't need to get a paycheck to be a Purism member). That is why I call upon contributors (technical, but mostly non-technical too) to help us make the PureOS wiki the best resource on the net for our needs. Write tutorials for others, gather and put info on the wiki, create an ideas page and vote on the ideas so we can see what the community wants, chat with us so we all understand what, why and how we are working on things. Make it as transparent as possible. Everyone interested, please get in touch with our teams by either poking us online (IRC, social accounts) or via emails (our personal ones, or [hr, pr, feedback]@puri.sm).

To finish this writing (as it is 8am here and I still want to rest a bit, because I will have meetings for 6 hours straight today) - I wanted to share some personal insight into a few things from my point of view. I wanted to say that despite all the troubles and the people who tried to make our time even harder (and it is already hard with all the limitations that come naturally today with our kind of work), we still create products, we still ship them, we still improve step by step, we still hire and we are still building. Keeping all that together and making progress is for me a milestone greater than just creating a technical product. I just hope we will continue and improve our pace so we can start progressing towards my personal great goal - to integrate and cooperate with most of the FLOSS ecosystem.

P.S. Yes, I also (finally!) became an official Debian Developer - I still haven't had time to sit down and properly think and cry (as every good man does) about it.

22 August, 2016 06:45AM by Zlatan Todoric

hackergotchi for Christian Perrier

Christian Perrier

[LIFE] Running activities - Ultra Trail du Mont-Blanc

Hello dear readers,

It's been ages since I last blogged. Being far less active in Debian than I've been in the past, I guess this is a logical consequence.

However, I'm still active as you may witness if you read the debian-boot mailing list: I still consider myself part of the D-I team and I'm maintaining a few sports-related packages.

Most of you know what has taken precedence over Debian development, namely trail and ultra-trail running. And, well, it hasn't decreased, far from it: I have already run about 10 races this year, 6 of them longer than 50km, and I ran my favourite 100km mountain race in early July for the second year in a row.

So, in the upcoming week, I'll be trying to reach what is usually considered the Grail of ultra-trail runners: the Ultra-Trail du Mont-Blanc race in Chamonix.

The race is fairly simple: run all the way around the Mont-Blanc summits, a 160km race with a bit less than 10,000 meters of positive climb. The race itself takes place between 800 and 2700 meters (so no "high mountain"), and I expect to complete it (if I succeed) in about 40 hours.

I'm very confident (maybe too much?), as I successfully completed a much more difficult race last year (only 144km, but over 11,000 meters of positive climb and a much more difficult path... it took me over 50 hours to complete it).

You can follow me on the live tracking site. The race starts on Friday August 26th, 18:00 CEST.

If everything goes well, I have great projects for next year, including a 100-mile race in Colorado in August (we'll be traveling in the USA for over 3 weeks, peaking with the solar eclipse of August 21st in Kansas City).

22 August, 2016 05:47AM

hackergotchi for Paul Tagliamonte

Paul Tagliamonte

go-wmata - golang bindings to the DC metro system

A few weeks ago, I hacked up go-wmata, some golang bindings to the WMATA API. This is super handy if you are in the DC area and want to interface with the WMATA data.

As a proof of concept, I wrote a yo bot called @WMATA, which returns the closest station if you Yo it your location. For hilarity, feel free to Yo it from outside DC.

For added fun, and puns, I wrote a dbus proxy for the API as well, at wmata-dbus, so you can query the next train over dbus. One thought was to make a GNOME Shell extension to tell me when the next train is due. I’d love help with this (or pointers on how to learn how to do this right).

22 August, 2016 02:16AM

August 21, 2016

hackergotchi for Cyril Brulebois

Cyril Brulebois

Freelance Debian consultant: running DEBAMAX

Executive summary

Since October 2015, I've been running a FLOSS consulting company, specialized on Debian, called DEBAMAX.

DEBAMAX logo

Longer version

Everything started two years ago. Back then I blogged about one of the biggest changes in my life: trying to find the right balance between volunteer work as a Debian Developer and entrepreneurship as a freelance Debian consultant. A big change, because it meant giving up the comfort of the salaried world and figuring out whether working this way would be sufficient to earn a living…

I experimented for a while under a simplified status. It comes with a number of limitations, but it's a huge win compared to France's heavy company-related administrivia. Here's what it looked like, everything being done online:

  • 1 registration form to begin with: wait a few days, get an identifier from INSEE, mention it in your invoices, there you go!

  • 4 tax forms a year: taxes can be declared monthly or quarterly, I went for the latter.

A number of things became quite clear after a few months:

  • I love this new job! Sharing my Debian knowledge with customers, and using it to help them build/improve/stabilise their products and their internal services feels great!

  • Even if I wasn't aware of that initially, it seems like I've got a decent network already: Debian Developers, former coworkers, and friends thought about me for their Debian-related tasks. It was nice to hear about their needs, say yes, sign paperwork, and start working right away!

  • While I'm trying really hard not to get too optimistic (achieving a given turnover in the first year doesn't mean you're guaranteed to do so again the following year), it seemed to go well enough for me to consider switching from this simplified status to a full-blown company.

Thankfully I was eligible to be accompanied by the local Chamber of Commerce and Industry (CCI Rennes), which provides teaching sessions for new entrepreneurs, coaching, and meeting opportunities (accountants, lawyers, insurance companies, …). Summer in France is traditionally rather quiet (read: almost everybody is on vacation), so DEBAMAX officially started operating in October 2015. Besides various administrative and accounting duties, running this company doesn't change the way I've been working since July 2014, so everything is fine!

As before, I won't be writing much about it through my personal blog, except for an occasional update every other year; if you want to follow what's happening with DEBAMAX:

  • Website: debamax.com — in addition to the usual company, services, and references sections, it features a blog (with RSS) where some missions are going to be detailed (when it makes sense to share and when customers are fine with it). Spoiler alert: Tails is likely to be the first success story there. ;)
  • Twitter: @debamax — which is going to be retweeted for a while from my personal account, @CyrilBrulebois.

21 August, 2016 10:35PM

hackergotchi for Gregor Herrmann

Gregor Herrmann

RC bugs 2016/30-33

not much to report, but I got at least some RC bugs fixed in the last few weeks. again mostly perl stuff:

  • #759979 – src:simba: "simba: FTBFS: RoPkg::Rsync ...failed! (needed)"
    keep ExtUtils::AutoInstall from downloading stuff, upload to DELAYED/7
  • #817549 – src:libropkg-perl: "libropkg-perl: Removal of debhelper compat 4"
    use debhelper compatibility level 5, upload to DELAYED/7
  • #832599 – iodine: "Fails to start after upgrade"
    update service file and use deb-systemd-helper in postinst
  • #832832 – src:perlbrew: "perlbrew: FTBFS: Tests failures"
    add patch to deal with removed old perl version (pkg-perl)
  • #832833 – src:libtest-valgrind-perl: "libtest-valgrind-perl: FTBFS: Tests failures"
    upload new upstream release (pkg-perl)
  • #832853 – src:libmojomojo-perl: "libmojomojo-perl: FTBFS: Tests failures"
    close, the underlying problem is fixed (pkg-perl)
  • #832866 – src:libclass-c3-xs-perl: "libclass-c3-xs-perl: FTBFS: Tests failures"
    upload new upstream release (pkg-perl)
  • #834210 – libdancer-plugin-database-core-perl: "libdancer-plugin-database-perl: FTBFS: Failed 1/5 test programs. 6/45 subtests failed."
    upload new upstream release (pkg-perl)
  • #834793 – libgit-wrapper-perl: "libgit-wrapper-perl: FTBFS: t/basic.t whitespace changes"
    add patch from upstream bug (pkg-perl)

21 August, 2016 09:56PM

hackergotchi for David Moreno

David Moreno

WIP: Perl bindings for Facebook Messenger

A couple of weeks ago I started looking into wrapping the Facebook Messenger API into Perl. Since all the calls are extremely simple using a REST API, I thought it could be even easier and simpler to provide a small framework to hook bots using PSGI/Plack.

So I started putting some things together and with a very simple interface you could do a lot:

use strict;
use warnings;
use Facebook::Messenger::Bot;

my $bot = Facebook::Messenger::Bot->new({
    access_token   => '...',
    app_secret     => '...',
    verify_token   => '...'
});

$bot->register_hook_for('message', sub {
    my $bot = shift;
    my $message = shift;

    my $res = $bot->deliver({
        recipient => $message->sender,
        message => { text => "You said: " . $message->text() }
    });
    ...
});

$bot->spin();

You can hook a script like that as a .psgi file and plug it in to whatever you want.
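For instance, hooking it up with Plack's standard runner could look like this (a sketch; bot.psgi is just an assumed filename for the script above):

$ plackup bot.psgi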

Once you have some more decent user flow and whatnot, you can build something like:



…using a simple script like this one.

The work is not finished and not yet CPAN-ready, but I’m posting this in case someone wants to join me in this mini-project or has suggestions; the work in progress is here.

Thanks!

21 August, 2016 05:18PM

Cosmetic changes to my posts archive

I’ve been doing a lot of cosmetic/layout changes to the nearly 750 posts in my blog’s archive. I apologize if this has broken some feed readers or aggregators. It appears like Hexo still needs better syndication support.

21 August, 2016 04:55PM

Running find with two or more commands to -exec

I spent a couple of minutes today trying to understand how to make find(1) execute two commands on the same target.

Instead of this or any similar crappy variants:

$ find . -type d -iname "*0" -mtime +60 -exec scp -r -P 1337 "{}" "meh.server.com:/mnt/1/backup/storage" && rm -rf "{}" \;

Try something like this:

$ find . -type d -iname "*0" -mtime +60 -exec scp -r -P 1337 "{}" "meh.server.com:/mnt/1/backup/storage" \; -exec rm -rf "{}" \;

Which is:

$ find . -exec command {} \; -exec other command {} \;

And you’re good to go.
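As an aside, find treats each -exec as a test: the second command runs on a given path only if the first one exited with status 0, so rm is only reached after a successful scp. If you'd rather keep an explicit &&, one variant (a sketch using the same paths as above) is to hand both commands to a shell:

$ find . -type d -iname "*0" -mtime +60 -exec sh -c 'scp -r -P 1337 "$0" "meh.server.com:/mnt/1/backup/storage" && rm -rf "$0"' {} \;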

21 August, 2016 04:11PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppEigen 0.3.2.9.0

A new upstream release 3.2.9 of Eigen is now reflected in a new RcppEigen release 0.3.2.9.0 which got onto CRAN late yesterday and is now going into Debian. Once again, Yixuan Qiu did the heavy lifting of merging upstream (and two local twists we need to keep around). Another change is by James "coatless" Balamuta who added a row exporter.

The NEWS file entry follows.

Changes in RcppEigen version 0.3.2.9.0 (2016-08-20)

  • Updated to version 3.2.9 of Eigen (PR #37 by Yixuan closing #36 from Bob Carpenter of the Stan team)

  • An exporter for RowVectorX was added (thanks to PR #32 by James Balamuta)

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

21 August, 2016 12:21PM

August 20, 2016

Daniel Stender

Collected notes from Python packaging

Here are some collected notes on particular problems from packaging Python stuff for Debian; more like this will follow in the future. Some of the issues discussed here might be rather simple and even benign for the experienced packager, but maybe this will be helpful for people coming across the same issues for the first time and wondering what's going wrong. Some of the things discussed aren't easy, though. The notes are in no particular order.

UnicodeDecodeError on open() in Python 3 running in non-UTF-8 environments

I came across this problem again recently when packaging httpbin 0.5.0. The build breaks in the following way, e.g. when trying to build with sbuild in a chroot; this is the first run of setup.py with the default Python 3 interpreter:

I: pybuild base:184: python3.5 setup.py clean 
Traceback (most recent call last):
  File "setup.py", line 5, in <module>
    os.path.join(os.path.dirname(__file__), 'README.rst')).read()
  File "/usr/lib/python3.5/encodings/ascii.py", line 26, in decode
    return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 2386: ordinal not in range(128)
E: pybuild pybuild:274: clean: plugin distutils failed with: exit code=1: python3.5 setup.py clean 

One comes across UnicodeDecodeError-s quite often on different occasions, mostly related to Python 2 packaging (like here). Here the case is that setup.py tries to read the long_description from the opened UTF-8 encoded file README.rst:

long_description = open(
    os.path.join(os.path.dirname(__file__), 'README.rst')).read()

This is a problem for Python 3.5 (and other Python 3 versions) when setup.py is executed by an interpreter running in a non-UTF-8 environment [1]:

$ LANG=C.UTF-8 python3.5 setup.py clean
running clean
$ LANG=C python3.5 setup.py clean
Traceback (most recent call last):
  File "setup.py", line 5, in <module>
    os.path.join(os.path.dirname(__file__), 'README.rst')).read()
  File "/usr/lib/python3.5/encodings/ascii.py", line 26, in decode
    return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 2386: ordinal not in range(128)
$ LANG=C python2.7 setup.py clean
running clean

The reason for this error is that the default encoding of the file object returned by open(), e.g. in Python 3.5, is platform dependent, so that read() fails when there's a mismatch:

>>> readme = open('README.rst')
>>> readme
<_io.TextIOWrapper name='README.rst' mode='r' encoding='ANSI_X3.4-1968'>

Non-UTF-8 build environments, where $LANG isn't set at all or is set to C, are common or even the default in Debian packaging; for example, the primary environment of the continuous integration and test builds for reproducible builds features that (see here). That's also true for the base system of the sbuild environment:

$ schroot -d / -c unstable-amd64-sbuild -u root
(unstable-amd64-sbuild)root@varuna:/# locale
LANG=
LANGUAGE=
LC_CTYPE="POSIX"
LC_NUMERIC="POSIX"
LC_TIME="POSIX"
LC_COLLATE="POSIX"
LC_MONETARY="POSIX"
LC_MESSAGES="POSIX"
LC_PAPER="POSIX"
LC_NAME="POSIX"
LC_ADDRESS="POSIX"
LC_TELEPHONE="POSIX"
LC_MEASUREMENT="POSIX"
LC_IDENTIFICATION="POSIX"
LC_ALL=

A problem like this is solved most elegantly by installing a workaround in debian/rules. A quick and easy fix is to add export LC_ALL=C.UTF-8 there, which supersedes the locale settings of the build environment. $LC_ALL is commonly used to change existing locale settings; it overrides all other locale variables with the same value (see here). C.UTF-8 is a UTF-8 locale which is available by default in a base system; it can be used without installing the locales package (or worse, the huge locales-all package):

(unstable-amd64-sbuild)root@varuna:/# locale -a
C
C.UTF-8
POSIX
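Put into debian/rules, the workaround could look like this minimal sketch (the dh options here are generic placeholders, not taken from the httpbin packaging):

#!/usr/bin/make -f

export LC_ALL=C.UTF-8

%:
	dh $@ --with python3 --buildsystem=pybuild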

This problem should ideally be taken care of upstream. In Python 3, the built-in open() is io.open(), for which a specific encoding can be given, so that the UnicodeDecodeError vanishes. Python 2's built-in open() doesn't support an encoding argument, but io.open() is available there, too. A fix which works for both Python branches goes like this:

import io
long_description = io.open(
    os.path.join(os.path.dirname(__file__), 'README.rst'), encoding='utf-8').read()

non-deterministic order of requirements in egg-info/requires.txt

This problem appeared in prospector/0.11.7-5 in the reproducible builds test builds; that was the first package revision of Prospector running on Python 3 [2]. It was revealed that there were differences in the sorting order of the [with_everything] dependencies/requirements in prospector-0.11.7.egg-info/requires.txt when the package was built on varying systems:

$ debbindiff b1/prospector_0.11.7-6_amd64.changes b2/prospector_0.11.7-6_amd64.changes 
{...}
├── prospector_0.11.7-6_all.deb
│   ├── file list
│   │ @@ -1,3 +1,3 @@
│   │  -rw-r--r--   0        0        0        4 2016-04-01 20:01:56.000000 debian-binary
│   │ --rw-r--r--   0        0        0     4343 2016-04-01 20:01:56.000000 control.tar.gz
│   │ +-rw-r--r--   0        0        0     4344 2016-04-01 20:01:56.000000 control.tar.gz
│   │  -rw-r--r--   0        0        0    74512 2016-04-01 20:01:56.000000 data.tar.xz
│   ├── control.tar.gz
│   │   ├── control.tar
│   │   │   ├── ./md5sums
│   │   │   │   ├── md5sums
│   │   │   │   │┄ Files in package differ
│   ├── data.tar.xz
│   │   ├── data.tar
│   │   │   ├── ./usr/share/prospector/prospector-0.11.7.egg-info/requires.txt
│   │   │   │ @@ -1,12 +1,12 @@
│   │   │   │  
│   │   │   │  [with_everything]
│   │   │   │ +pyroma>=1.6,<2.0
│   │   │   │  frosted>=1.4.1
│   │   │   │  vulture>=0.6
│   │   │   │ -pyroma>=1.6,<2.0

The reproducible builds folks recognized this as a toolchain problem and set up the issue randomness_in_python_setuptools_requires.txt to cover it. Plus, a wishlist bug against python-setuptools was filed to fix this. The patch provided by Chris Lamb adds sorting of the dependencies in requires.txt for Setuptools by applying sorted() (stdlib) to the lines in _write_requirements() in command/egg_info.py:

--- a/setuptools/command/egg_info.py
+++ b/setuptools/command/egg_info.py
@@ -406,7 +406,7 @@ def warn_depends_obsolete(cmd, basename, filename):
 def _write_requirements(stream, reqs):
     lines = yield_lines(reqs or ())
     append_cr = lambda line: line + '\n'
-    lines = map(append_cr, lines)
+    lines = map(append_cr, sorted(lines))
     stream.writelines(lines)

OK, so no instance in the toolchain sorts these requirements properly if differences appear, but what is the reason for these differences in the Prospector packages in the first place? The problem is somewhat subtle. In setup.py, [with_everything] is a dictionary entry of _OPTIONAL (which is used for extras_require) that is created by a list comprehension out of the other values in that dictionary. The code goes like this:

_OPTIONAL = {
    'with_frosted': ('frosted>=1.4.1',),
    'with_vulture': ('vulture>=0.6',),
    'with_pyroma': ('pyroma>=1.6,<2.0',),
    'with_pep257': (),  # note: this is no longer optional, so this option will be removed in a future release
}
_OPTIONAL['with_everything'] = [req for req_list in _OPTIONAL.values() for req in req_list]

The result, the new _OPTIONAL dictionary including the key with_everything (which, without further sorting, is the source of the list of requirements in requires.txt), looks e.g. like this (code snippet run through IPython):

In [3]: _OPTIONAL
Out[3]: 
{'with_everything': ['vulture>=0.6', 'pyroma>=1.6,<2.0', 'frosted>=1.4.1'],
 'with_vulture': ('vulture>=0.6',),
 'with_pyroma': ('pyroma>=1.6,<2.0',),
 'with_frosted': ('frosted>=1.4.1',),
 'with_pep257': ()}

That list comprehension iterates over the other dictionary entries to gather the value of with_everything, and – Python programmers know this of course – dictionaries are not ordered, so the order of the key-value pairs isn't fixed but is determined by conditions outside the interpreter. That's the source of the non-determinism in this Debian package revision of Prospector.
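This is easy to observe directly: string hashing in Python 3 is randomized per interpreter run unless $PYTHONHASHSEED is fixed, so repeatedly running a toy example like the following (the dictionary is just an illustration) typically prints the keys in varying orders:

$ python3.5 -c "print(list({'with_frosted': 1, 'with_vulture': 2, 'with_pyroma': 3}))"

The same mechanism shuffles _OPTIONAL.values(), and with it the requirements gathered into with_everything.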

This problem has been fixed by a patch, which just presorts the list of requirements before it gets added to _OPTIONAL:

@@ -76,8 +76,8 @@
     'with_pyroma': ('pyroma>=1.6,<2.0',),
     'with_pep257': (),  # note: this is no longer optional, so this option will be removed in a future release
 }
-_OPTIONAL['with_everything'] = [req for req_list in _OPTIONAL.values() for req in req_list]
-
+with_everything = [req for req_list in _OPTIONAL.values() for req in req_list]
+_OPTIONAL['with_everything'] = sorted(with_everything)

In comparison to the list method sort(), the function sorted() does not change iterables in-place but returns a new list, so either could be used here. As a side note, egg-info/requires.txt isn't even needed, but that's another issue.
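A quick illustration of the difference, using the requirement strings from above:

>>> reqs = ['vulture>=0.6', 'pyroma>=1.6,<2.0', 'frosted>=1.4.1']
>>> sorted(reqs)
['frosted>=1.4.1', 'pyroma>=1.6,<2.0', 'vulture>=0.6']
>>> reqs
['vulture>=0.6', 'pyroma>=1.6,<2.0', 'frosted>=1.4.1']
>>> reqs.sort()
>>> reqs
['frosted>=1.4.1', 'pyroma>=1.6,<2.0', 'vulture>=0.6']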


  1. As an alternative to prefixing LC_ALL, env -i could be used to get an empty environment. 

  2. 0.11.7-4 already ran on Python 3, but that package revision was in experimental due to the switch to Python 3 and therefore wasn't tested by the reproducible builds continuous integration. 

20 August, 2016 11:17PM by Daniel Stender

hackergotchi for Francois Marier

Francois Marier

Replacing a failed RAID drive

This is a translation of the original English article at https://feeding.cloud.geek.nz/posts/replacing-a-failed-raid-drive/.

Here is the procedure I followed to replace a failed RAID drive on a Debian machine.

Replacing the drive

After noticing that /dev/sdb had been kicked out of my RAID, I used smartmontools to identify the serial number of the drive to pull out:

smartctl -a /dev/sdb

With this information in hand, I shut down the computer, removed the failed drive and put a new blank one in its place.

Initializing the new drive

After booting with the new blank drive, I copied the partition table using parted.

First, I examined the partition table on the healthy drive:

$ parted /dev/sda
unit s
print

and created a new partition table on the replacement drive:

$ parted /dev/sdb
unit s
mktable gpt

Then I used the mkpart command for each of my 4 partitions and gave them all the same sizes as the matching partitions on /dev/sda.

Finally, I used the commands toggle 1 bios_grub (boot partition) and toggle X raid (where X is the partition number) on all the RAID partitions, before checking with the print command that the two partition tables are now identical.

Resyncing/recreating the RAID arrays

To sync the data from the healthy drive (/dev/sda) to the replacement one (/dev/sdb), I ran the following commands on my RAID1 partitions:

mdadm /dev/md0 -a /dev/sdb2
mdadm /dev/md2 -a /dev/sdb4

and kept an eye on the synchronization status with:

watch -n 2 cat /proc/mdstat

To speed up the process, I used the following trick:

blockdev --setra 65536 "/dev/md0"
blockdev --setra 65536 "/dev/md2"
echo 300000 > /proc/sys/dev/raid/speed_limit_min
echo 1000000 > /proc/sys/dev/raid/speed_limit_max

Then I recreated my RAID0 swap partition as follows:

mdadm /dev/md1 --create --level=0 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md1

Because the swap partition is brand new (a RAID0 partition cannot be restored, it has to be recreated from scratch), I had to do two things:

  • replace the swap UUID in /etc/fstab with the UUID given by the mkswap command (or by using the blkid command and taking the UUID for /dev/md1)
  • replace the UUID of /dev/md1 in /etc/mdadm/mdadm.conf with the one returned for /dev/md1 by the mdadm --detail --scan command

Making sure the machine can boot from the replacement drive

To be certain that the machine will boot with either of the two drives, I reinstalled the grub boot loader onto the new drive:

grub-install /dev/sdb

before rebooting with both drives connected. This confirms that my setup works properly.

Then I booted without the /dev/sda drive, to make sure everything would still work if that drive were to die and leave me with only the new one (/dev/sdb).

This test obviously breaks the synchronization between the two drives, so I had to reboot with both drives connected and then re-add /dev/sda to all the RAID1 arrays:

mdadm /dev/md0 -a /dev/sda2
mdadm /dev/md2 -a /dev/sda4

Once all that was done, I rebooted once more with both drives to confirm that everything works fine:

cat /proc/mdstat

and then ran a full SMART test on the new drive:

smartctl -t long /dev/sdb

20 August, 2016 10:00PM


Jose M. Calhariz

Availability of at in the Major Linux Distributions

In this blog post I will cover which versions of the at software are used by the leading Linux distributions, as reported by LWN.


Currently some distributions are lagging on the use of the latest at software.

20 August, 2016 06:11PM by Jose M. Calhariz

Russell Coker

Basics of Backups

I’ve recently had some discussions about backups with people who aren’t computer experts, so I decided to blog about this for the benefit of everyone. Note that this post will deliberately avoid issues that require great knowledge of computers. I have written other posts that will benefit experts.

Essential Requirements

Everything that matters must be stored in at least 3 places. Every storage device will die eventually. Every backup will die eventually. If you have 2 backups then you are covered for the primary storage failing and the first backup failing. Note that I’m not saying “only have 2 backups” (I have many more) but 2 is the bare minimum.

Backups must be in multiple places. One way of losing data is if your house burns down, if that happens all backup devices stored there will be destroyed. You must have backups off-site. A good option is to have backup devices stored by trusted people (friends and relatives are often good options).

It must not be possible for one event to wipe out all backups. Some people use “cloud” backups, there are many ways of doing this with Dropbox, Google Drive, etc. Some of these even have free options for small amounts of storage, for example Google Drive appears to have 15G of free storage which is more than enough for all your best photos and all your financial records. The downside to cloud backups is that a computer criminal who gets access to your PC can wipe it and the backups. Cloud backup can be a part of a sensible backup strategy but it can’t be relied on (also see the paragraph about having at least 2 backups).

Backup Devices

USB flash “sticks” are cheap and easy to use. The quality of some of those devices isn’t too good, but the low price and small size means that you can buy more of them. It would be quite easy to buy 10 USB sticks for multiple copies of data.

Stores that sell office-supplies sell USB attached hard drives which are quite affordable now. It’s easy to buy a couple of those for backup use.

The cheapest option for backing up moderate amounts of data is to get a USB-SATA device. This connects to the PC by USB and has a cradle to accept a SATA hard drive. That allows you to buy cheap SATA disks for backups and even use older disks as backups.

When choosing backup devices, consider the environment that they will be stored in. If you want to store a backup in the glove box of your car (which could be good when travelling) then an SD card or USB flash device would be a good choice because they are resistant to physical damage. Note that if you have no other options for off-site storage then the glove box of your car will probably survive if your house burns down.

Multiple Backups

It’s not uncommon for data corruption or mistakes to be discovered some time after it happens. Also in recent times there is a variety of malware that encrypts files and then demands a ransom payment for the decryption key.

To address these problems you should have older backups stored. It’s not uncommon in a corporate environment to have backups every day stored for a week, backups every week stored for a month, and monthly backups stored for some years.

For a home use scenario it’s more common to make backups every week or so and take backups to store off-site when it’s convenient.

Offsite Backups

One common form of off-site backup is to store backup devices at work. If you work in an office then you will probably have some space in a desk drawer for personal items. If you don’t work in an office but have a locker at work then that’s good for storage too, if there is high humidity then SD cards will survive better than hard drives. Make sure that you encrypt all data you store in such places or make sure that it’s not the secret data!

Banks have a variety of ways of storing items. Bank safe deposit boxes can be used for anything that fits, and hard drives do fit. If you have a mortgage your bank might give you free storage of “papers” as part of the service (Commonwealth Bank of Australia used to offer that). A few USB sticks or SD cards in an envelope could fit the “papers” criteria. An accounting firm may also store documents for free for you.

If you put a backup on USB or SD storage in your wallet then that can also be a good offsite backup. For most people losing data from disk is more common than losing their wallet.

A modern mobile phone can also be used for backing up data while travelling. For a few years I’ve been doing that. But note that you have to encrypt all data stored on a phone so an attacker who compromises your phone can’t steal it. In a typical phone configuration the mass storage area is much less protected than application data. Also note that customs and border control agents for some countries can compel you to provide the keys for encrypted data.

A friend suggested burying a backup device in a sealed plastic container filled with desiccant. That would survive your house burning down and in theory should work. I don’t know of anyone who’s tried it.

Testing

On occasion you should try to read the data from your backups and compare it to the original data. It sometimes happens that backups are discovered to be useless after years of operation.

Secret Data

Before starting a backup it’s worth considering which of the data is secret and which isn’t. Data that is secret needs to be treated differently and a mixture of secret and less secret data needs to be treated as if it’s all secret.

One category of secret data is financial data. If your accountant provides document storage then they can store that, generally your accountant will have all of your secret financial data anyway.

Passwords need to be kept secret but they are also very small. So making a written or printed copy of the passwords is part of a good backup strategy. There are options for backing up paper that don’t apply to data.

One category of data that is not secret is photos. Photos of holidays, friends, etc are generally not that secret and they can also comprise a large portion of the data volume that needs to be backed up. Apparently some people have a backup strategy for such photos that involves downloading from Facebook to restore, that will help with some problems but it’s not adequate overall. But any data that is on Facebook isn’t that secret and can be stored off-site without encryption.

Backup Corruption

With the amounts of data that are used nowadays the probability of data corruption is increasing. If you use any compression program with the data that is backed up (even data that can’t be compressed such as JPEGs) then errors will be detected when you extract the data. So if you have backup ZIP files on 2 hard drives and one of them gets corrupt you will easily be able to determine which one has the correct data.

Failing Systems – update 2016-08-22

When a system starts to fail it may limp along for years and work reasonably well, or it may totally fail soon. At the first sign of trouble you should immediately make a full backup to separate media. Use different media from your regular backups in case the data is corrupt, so you don’t overwrite good backups with bad ones.

One traditional sign of problems has been hard drives that make unusual sounds. Modern drives are fairly quiet so this might not be loud enough to notice. Another sign is hard drives that take unusually large amounts of time to read data. If a drive has some problems it might read a sector hundreds or even thousands of times until it gets the data which dramatically reduces system performance. There are lots of other performance problems that can occur (system overheating, software misconfiguration, and others), most of which are correlated with potential data loss.

A modern SSD storage device (as used in a lot of the recent laptops) doesn’t tend to go slow when it nears the end of its life. It is more likely to just randomly fail entirely and then work again after a reboot. There are many causes of systems randomly hanging or crashing (of which overheating is common), but they are all correlated with data loss so a good backup is a good idea.

When in doubt make a backup.

Any Suggestions?

If you have any other ideas for backups by typical home users then please leave a comment. Don’t comment on expert issues though, I have other posts for that.

20 August, 2016 02:04AM by etbe

August 19, 2016

hackergotchi for Joey Hess

Joey Hess

keysafe alpha release

Keysafe securely backs up a gpg secret key or other short secret to the cloud. But not yet. Today's alpha release only supports storing the data locally, and I still need to finish tuning the argon2 hash difficulties with modern hardware. Other than that, I'm fairly happy with how it's turned out.

Keysafe is written in Haskell, and many of the data types in it keep track of the estimated CPU time needed to create, decrypt, and brute-force them. Running that through an AWS spot pricing cost model lets keysafe estimate how much an attacker would need to spend to crack your password.

(Above is for the password "makesad spindle stick")

If you'd like to be an early adopter, install it like this:

sudo apt-get install haskell-stack libreadline-dev libargon2-0-dev zenity
stack install keysafe

Run ~/.local/bin/keysafe --backup --store-local to back up a gpg key to ~/.keysafe/objects/local/

I still need to tune the argon2 hash difficulty, and I need benchmark data to do so. If you have a top of the line laptop or server class machine that's less than a year old, send me a benchmark:

~/.local/bin/keysafe --benchmark | mail keysafe@joeyh.name -s benchmark

Bonus announcement: http://hackage.haskell.org/package/zxcvbn-c/ is my quick Haskell interface to the C version of the zxcvbn password strength estimation library.

PS: Past 50% of my goal on Patreon!

19 August, 2016 11:48PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RQuantLib 0.4.3: Lots of new Fixed Income functions

A release of RQuantLib is now on CRAN and in Debian. It contains a lot of new code contributed by Terry Leitch over a number of pull requests. See below for full details but the changes focus on Fixed Income and Fixed Income Derivatives, and cover swap, discount curves, swaptions and more.

In the blog post for the previous release 0.4.2, we noted that a volunteer was needed for a new Windows library build of QuantLib to replace the outdated version 1.6 used there. Josh Ulrich stepped up and built them. Josh and I tried for several months to get win-builder to install these, but sadly other things took priority and we were unsuccessful. So this release will not have Windows binaries on CRAN, as QuantLib 1.8 is not available there. Instead, you can use the ghrr drat and do

if (!require("drat")) install.packages("drat")
drat::addRepo("ghrr")
install.packages("RQuantLib")

to fetch prebuilt Windows binaries from the ghrr drat. Everybody else gets sources from CRAN.

The full changes are detailed below.

Changes in RQuantLib version 0.4.3 (2016-08-19)

  • Changes in RQuantLib code:

    • Discount curve creation has been made more general by allowing additional arguments for day counter and fixed and floating frequency (contributed by Terry Leitch in #31, plus some work by Dirk in #32).

    • Swap leg parameters are now in combined variable and allow textual description (Terry Leitch in #34 and #35)

    • BermudanSwaption has been modified to take option expiration and swap tenors in order to enable more general swaption structure pricing; a more general search for the swaptions was developed to accommodate this. Also, a DiscountCurve is allowed as an alternative to market quotes to reduce computation time for a portfolio on a given valuation date (Terry Leitch in #42 closing issue #41).

    • A new AffineSwaption model was added with a similar interface to BermudanSwaption but allowing for valuation of a European exercise swaption utilizing the same affine methods available in BermudanSwaption. AffineSwaption will also value a Bermudan swaption, but does not take rate market quotes to build a term structure, and a DiscountCurve object is required (Terry Leitch in #43).

    • Swap tenors can now be defined up to 100 years (Terry Leitch in #48 fixing issue #46).

    • Additional (shorter term) swap tenors are now defined (Guillaume Horel in #49, #54, #55).

    • New SABR swaption pricer (Terry Leitch in #60 and #64, small follow-up by Dirk in #65).

    • Use of Travis CI has been updated, switching to a maintained fork of the deprecated mainline.

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc. should go to the rquantlib-devel mailing list off the R-Forge page. Issue tickets can be filed at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

19 August, 2016 09:18PM

Simon Désaulniers

[GSOC] Final report




The Google Summer of Code is now over. It has been a great experience and I’m very glad I was able to make it. I’ve had the pleasure of contributing to a project showing very good promise for the future of communication: Ring. The words “privacy” and “freedom” in terms of technology are more and more present in people’s minds. All sorts of projects aiming at these goals are coming to life every day, like decentralized web networks (e.g. ZeroNet), blockchain-based applications, etc.

Debian

I’ve had the great opportunity to go to the Debian Conference 2016. I’ve been introduced to the debian community and debian developpers (“dd” in short :p). I was lucky to meet with great people like the president of the FSF, John Sullivan. You can have a look at my Debian conference report here.

If you want to read my Debian reports, you can do so by browsing the “Google Summer Of Code” category on this blog.

What I have done

Ring has been in the official Debian repositories since June 30th. This is good news for the GNU/Linux community. I’m proud to say that I’ve been able to contribute to Debian by working on OpenDHT and developing new functionality to reduce network traffic. The goal behind this was to finally optimize the traffic consumed by data persistence on the DHT.

Github repository: https://github.com/savoirfairelinux/opendht

Queries

Issues:

  • #43: DHT queries

Pull requests:

  • #79: [DHT] Queries: remote values filtering
  • #93: dht: return consistent query from local storage
  • #106: [dht] rework get timings after queries in master

Value pagination

Issues:

  • #71: [DHT] value pagination

Pull requests:

  • #110: dht: Value pagination using queries
  • #113: dht: value pagination fix

Indexation (feat. Nicolas Reynaud)

Pull requests:

  • #77: pht: fix invalid comparison, inexact match lookup
  • #78: [PHT] Key consistency

General maintenance of OpenDHT

Issues:

  • #72: Packaging issue for Python bindings with CMake: $DESTDIR not honored
  • #75: Different libraries built with Autotools and CMake
  • #87: OpenDHT does not build on armel
  • #92: [DhtScanner] doesn’t compile on LLVM 7.0.2
  • #99: 0.6.2 filenames in 0.6.3

Pull requests:

  • #73: dht: consider IPv4 or IPv6 disconnected on operation done
  • #74: [packaging] support python installation with make DESTDIR=$DIR
  • #84: [dhtnode] user experience
  • #94: dht: make main store a vector>
  • #94: autotools: versionning consistent with CMake
  • #103: dht: fix sendListen loop bug
  • #106: dht: more accurate name for requested nodes count
  • #108: dht: unify bootstrapSearch and refill method using node cache

View by commits

You can have a look at my work by commits just by clicking this link: https://github.com/savoirfairelinux/opendht/commits/master?author=sim590

What’s left to be done

Data persistence

The only thing left before achieving the totality of my work is to rigorously test the data persistence behavior to demonstrate the network traffic reduction. To do so we use our benchmark python module. We are able to analyse traffic and produce plots like this one:


Plot: 32 nodes, 1600 values with normal condition test.

This particular plot was drawn before the enhancements. We are confident that we can improve the results using the work produced during the GSoC.

TCP

In the middle of the GSoC, we soon realized that switching from UDP to TCP would require too much effort in too short a lapse of time. Also, it is not yet clear whether we should really do that.

19 August, 2016 04:04PM

Olivier Grégoire

Conclusion Google Summer of Code 2016

SmartInfo project with Debian

1. Me

Before getting into the thick of my project, let me present myself:
I am Olivier Grégoire (Gasuleg), and I study IT engineering at École de Technologie supérieure in Montreal.
I am a technician in electronics, and I began object-oriented programming just last year.
I applied to GSoC because I loved the concept of the project that I would work on and I really wanted to be part of it. I also wanted to discover the world of free software.

2. My Project

During this GSoC, I worked on the Ring project.

“Ring is a free software for communication that allows its users to make audio or video calls, in pairs or groups, and to send messages, safely and freely, in confidence.

Savoir-faire Linux and a community of contributors worldwide develop Ring. It is available on GNU/Linux, Windows, Mac OSX and Android. It can be associated with a conventional phone service or integrated with any connected object.

Under this very easy to use software, there is a combination of technologies and innovations opening all kinds of perspectives to its users and developers.

Ring is a free software whose code is open. Therefore, it is not the software that controls you.

With Ring, you take control of your communication!

Ring is an open source software under the GPL v3 license. Everyone can verify the codes and propose new ones to improve the software’s performance. It is a guarantee of transparency and freedom for everyone!”
Source: ring.cx

The problem concerns the typical user of Ring, the one who doesn’t launch Ring from a terminal: he has no information about what has happened in the system. My goal was to create a tool that displays statistics about Ring.

3. Quick Explanation of What My Program Can Do

The Code

Here are the links to the code I was working on all throughout the Google Summer of Code (you can see what I have done after the GSoC by clicking on the newest patches):

Patch                   Status
Daemon                  Merged
Lib Ring Client (LRC)   On Review
Gnome client            On Review
Remove unused code      Merged


What Can Be Displayed?
This is the final list of information I can display and some ideas on what information we could display in the future:

Information       Details                                  Done?
Call ID           The identification number of the call    Yes
Resolution        Local and remote                         Yes
Framerate         Local and remote                         Yes
Codec             Audio and video, local and remote        Yes
Bandwidth         Download and upload                      No
Performance use   CPU, GPU, RAM                            No
Security level    In SIP call                              No
Connection time                                            No
Packets lost                                               No

To launch it you need to right click on the call and click on “Show advanced information”.
To stop it, same thing: right click on the call and click on “Hide advanced information”.

4. More Details About My Project

My program needs to retrieve information from the daemon (LibRing) and then display it in the Gnome client. So, I needed to create patches for the daemon, the D-Bus layer (in the daemon patch), LibRingClient and the GNU/Linux (Gnome) client.

This is what the architecture of the project looks like.
source: ring.cx

And this is how I implemented my project.

5. Future of the Project

  • Add background on the gnome client
  • Implement the API smartInfoHub in all the other clients
  • Gather more information, such as bandwidth, resource consumption, security level, connection time, number of packets lost and anything else that could be deemed interesting
  • Display information for every participant in a conference call. I began to implement it for the daemon in patch set 25.

Weekly report link

Thanks

I would like to thank the following:
- The Google Summer of Code organisation, for this wonderful experience.
- Debian, for accepting my project proposal and letting me embark on this fantastic adventure.
- My mentor, Mr Guillaume Roguez, and all his team, for being there to help me.

19 August, 2016 12:57PM


hackergotchi for Norbert Preining

Norbert Preining

Debian/TeX Live 2016.20160819-1

A new – and unplanned – release in quick succession. I had uploaded testing packages to experimental which incorporate tex4ht into the TeX Live packages, but somehow the tex4ht transitional update slipped into sid and made many packages uninstallable. Well, after a bit more testing, let’s ship the beast to sid, meaning that tex4ht will finally be updated from the last 2009 version to the current status in TeX Live.

texlive2016-debian

From the list of new packages I want to pick out the group of phf* packages, which from a quick reading of the package documentation seem very interesting.

But most important is the incorporation of tex4ht into the TeX Live packages, so please report bugs and shortcomings to the BTS. Thanks.

New packages

aurl, bxjalipsum, cormorantgaramond, notespages, phffullpagefigure, phfnote, phfparen, phfqit, phfquotetext, phfsvnwatermark, phfthm, table-fct, tocdata.

Updated packages

acmart, acro, biblatex-abnt, biblatex-publist, bxdpx-beamer, bxjscls, bxnewfont, bxpdfver, dccpaper, etex-pkg, europasscv, exsheets, glossaries-extra, graphics-def, graphics-pln, guitarchordschemes, ijsra, kpathsea, latexpand, latex-veryshortguide, ledmac, libertinust1math, markdown, mcf2graph, menukeys, mfirstuc, mhchem, mweights, newpx, newtx, optidef, paralist, parnotes, pdflatexpicscale, pgfplots, philosophersimprint, pstricks-add, showexpl, tasks, tetex, tex4ht, texlive-docindex, udesoftec, xcolor-solarized.

19 August, 2016 10:43AM by Norbert Preining

hackergotchi for Guido Günther

Guido Günther

Foreman's Ansible integration

Gathering from some recent discussions it seems not that well known that Foreman (a lifecycle tool for your virtual machines) integrates well not only with Puppet but also with ansible. This is a list of tools I find useful in this regard:

  • The ansible-module-foreman ansible module allows you to setup all kinds of resources like images, compute resources, hostgroups, subnets, domains within Foreman itself via ansible using Foreman's REST API. E.g. creating a hostgroup looks like:

    - foreman_hostgroup:
        name: AHostGroup
        architecture: x86_64
        domain: a.domain.example.com
        foreman_host: "{{ foreman_host }}"
        foreman_user: "{{ foreman_user }}"
        foreman_pass: "{{ foreman_pw }}"
    
  • The foreman_ansible plugin for Foreman allows you to collect reports and facts from ansible provisioned hosts. This requires an additional hook in your ansible config like:

    [defaults]
    callback_plugins = path/to/foreman_ansible/extras/
    

    The hook will report to Foreman back after a playbook finished.

  • There are several options for creating hosts in Foreman via the ansible API. I'm currently using the ansible_foreman_module, tailored for image-based installs. In a playbook this looks like:

    - name: Build 10 hosts
      foremanhost:
        name: "{{ item }}"
        hostgroup: "a/host/group"
        compute_resource: "hopefully_not_esx"
        subnet: "webservernet"
        environment: "{{ env|default(omit) }}"
        ipv4addr: "{{ from_ipam|default(omit) }}"
        # Additional params to tag on the host
        params:
            app: varnish
            tier: web
            color: green
        api_user: "{{ foreman_user }}"
        api_password: "{{ foreman_pw }}"
        api_url: "{{ foreman_url }}"
      with_sequence:  start=1 end=10 format="newhost%02d"
    
  • The foreman_ansible_inventory is a dynamic inventory script for ansible that fetches all your hosts and groups via the Foreman REST APIs. It automatically groups hosts in ansible from Foreman's hostgroups, environments, organizations and locations and allows you to build additional groups based on any available host parameter (and combinations thereof). So using the above example and this configuration:

    [ansible]
    group_patterns = ["{app}-{tier}",
                      "{color}"]
    

    it would build the additional ansible groups varnish-web and green and put the above hosts into them. This way you can easily select hosts for e.g. blue/green deployments. You don't have to pass the parameters during host creation; if you have parameters on e.g. domains or hostgroups, these are available for grouping via group_patterns too (see the usage sketch after this list).

  • If you're grouping your hosts via the above inventory script and you use lots of parameters, then having these displayed on the host's detail page can be useful. You can use the foreman_params_tab plugin for that.
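
As a usage sketch: assuming the inventory script is saved as foreman_inventory.py (the exact filename depends on how you checked it out) and configured as above, the generated groups behave like any other ansible group; site.yml is a hypothetical playbook:

    $ ansible -i foreman_inventory.py varnish-web -m ping
    $ ansible-playbook -i foreman_inventory.py site.yml --limit green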

There's also support for triggering ansible runs from within Foreman itself but I've not used that so far.

19 August, 2016 09:16AM

hackergotchi for Michal Čihař

Michal Čihař

Wammu 0.42

Yesterday I released Wammu 0.42. There are no major updates; it's mostly the usual localization and minor bugfix release.

As usual, up-to-date packages are now available in Debian sid, the Gammu PPA for Ubuntu, or the openSUSE Build Service for various RPM-based distros.

Want to support further Wammu development? Check our donation options or support Gammu team on BountySource Salt.

Filed under: Debian English Gammu

19 August, 2016 04:00AM

hackergotchi for Eriberto Mota

Eriberto Mota

Debian: GnuPG 2, chroot and debsign

Since GnuPG 2 was made the default for Debian (Sid, August 2016), an error message has been appearing inside chroot jails when using the debuild/debsign commands:

clearsign failed: Inappropriate ioctl for device

The problem is that GPG 2 uses a dialog window to ask for the passphrase, and this dialog window needs a tty (from the /dev/pts/ directory). To solve the problem, you can use the following command (inside the jail):

# mount devpts -t devpts /dev/pts

Alternatively, you can add this line to the /etc/fstab file in the jail:

devpts /dev/pts devpts defaults 0 0

and use the command:

# mount /dev/pts
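
Putting it together, a one-off signing session inside the jail might look like this; the jail path and package name here are illustrative only:

# chroot /srv/jails/sid /bin/bash             # example jail path
# mount /dev/pts                              # works once the fstab entry above exists
# debsign ../mypackage_1.0-1_amd64.changes    # example package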

Enjoy!

19 August, 2016 01:38AM by Eriberto

Zlatan Todorić

Defcon24

I went to Defcon24 as a Purism representative. It was (as usual) held in Las Vegas, the city of sin. In the same manner as with DebConf, here we go with the good, the bad and the ugly.

Good

Badges are really cool. You can find good hackers here and there (but a very small number compared to the total). Some talks are good, and the workshop + village idea looks good (although I didn't manage to attend any workshop, as there was space for 1,100 and there were 22,000 attendees). The movie night idea is cool, and the Arcade space (where you can play old arcade games, relax, hack, and listen to some cool music) is really lovely. You also have a camp/village for kids learning things such as electronics and soldering, but you need to pay attention that they don't see too much of the twisted folks who also gather at this con. And that's it. Oh, yeah, Dark Tangent actually appears to be a cool dude.

Bad

One does not simply hold a so-called hacker conference in Las Vegas. Having a conference inside a hotel/casino, where you mix with gamblers and casino workers (for good or for bad), is simply not in the hacker spirit, and it certainly brings all kinds of people to the same place. Also, there was simply not enough space for 22,000 Defcon attendees, and you don't get to be proud of having on average ONLY 40-minute lines. You get to be proud if you don't have lines! Organization is not the strongest part of Defcon.

The huge majority of attendees are not hackers. They are script kiddies, hacker wannabes, comic-con people, a few totally lost souls, etc. That simply brings the quality of the conference down. Yes, it is cool to have a mix of many diverse people, but not just for the sake of having people.

Ugly

They lack a Code of Conduct (everyone knows I am not in favor of any written rules for how people should behave, but after Defcon I clearly see the need for them). Actually, to be honest, they do have one, but no one gives a damn about it. And you are supposed to report violations to the Goons - more about them below. Sexism is huge here. I remember and hear stories about sexual harassment in the IT industry, but Debian somehow mitigated that before I entered its domains, so I never experienced it. The sheer amount of sexist behavior at Defcon is tremendous. It appears to me that those people had lonely childhoods and now act like spoiled 6-year-olds: they need to yell to make their point, they make low and stupid sexist jokes, and they simply think that is cool.

The majority of Goons (their coordinators or whatever) are simply idiots. I don't know whether they feel they have superpowers, or are drunk, or just stupid, but yelling at people, throwing low jokes at people, more yelling, cursing all the time, more yelling - it simply doesn't work for me. So now you can see the irony of a CoC at Defcon. They even like to say: hey, we are old farts, let our con be as we want it to be. So no real diversity there. Either it is their way - and god forbid you try to change something for the better and make them stop cursing or throwing sexist jokes ("squeeze, people. together, touch each other, trust me it will feel good") - or the highway.

Also, it appears that for a huge number of vocal people the word "fuck" has some fetish meaning. Either it is needed to show how "fucking awesome this con or they are", or to "fucking tell few things about random fucking stuff". Thank you, but no thank you.

So what did I do during the con? I attended a few talks, had some discussions with people, went to one party (great DJs; again people doing stupid things, like breaking inventory, to name just one of them) and had so much time (read: "I was bored") that I bought a domain, brought up a server on which I configured nginx, cp'ed this blog to blog.zlatan.tech (yes, recently I added letsencrypt because it is, let me be in Defcon mood, FUCKING AWESOME GRRR UGH) and now I have even made a .onion domain for it. What can boredom do to people, right?

So the ultimate question is - would I go to Defcon again? I am strongly leaning towards no, but it is in my nature to give second chances, and now I have more experience (and I also have thick skin, so I guess I can play it calm for one more round).

19 August, 2016 01:15AM by Zlatan Todoric

August 18, 2016

Simon Désaulniers

[GSOC] Week 10&11&12 Report

Week 10 & 11

During these two weeks, I’ve worked hard on paginating values on the DHT.

Value pagination

As explained in my post on data persistence, we've had network traffic issues. The solution we found for this is to use the queries (see also this) to filter data on the remote peer we're communicating with. Queries let us select fields of a value instead of fetching whole values; this way, we can fetch values by their unique ids. Pagination is the process of first selecting all value ids for a given hash, then making a separate "get" request packet for each of those values.

This feature makes the DHT more UDP-friendly. In fact, UDP packets can be dropped when they are larger than the UDP MTU. Paginating values helps with this, as each UDP packet will now contain only one value.

Week 12

I’ve been working on making the “put” request lighter, again using queries. This is a key feature which will make it possible to enable data persistence. In fact, it enables us to send values to a peer only if it doesn’t already have the value we’re announcing. This will substantially reduce the overall traffic. This feature is still being tested. The last thing I have to do is to demonstrate the reduction of network traffic.

18 August, 2016 03:09PM

Zlatan Todorić

DebConf16 - new age in Debian community gathering

DebConf16

Finally got some time to write this blog post. DebConf for me is always something special: a family gathering of a weird combination of geeks (or is weird the default geek state?). To be honest, I can finally compare Debian as a hacker conference to other so-called hacker conferences. With that hat on, I can say that it is by far the most organized and highest-quality conference. Maybe I am biased, but I don't care too much about that. I simply love Debian, and that is no secret. So let's dive into my view of DebConf16, which was held in Cape Town, South Africa.

Cape Town

This was the first time we had the conference on the African continent (and I now see for the first time a DebConf bid from Asia, which leaves only Australia and the beautiful Pacific islands to start a bid). Cape Town itself is a pretty European-like city. That was kind of a bummer for me on the first day, especially as we were hosted at the University of Cape Town (which is quite a beautiful uni) and the surrounding neighborhood was very European. Almost right after the first day I was fine, because I started exploring the huge city. Cape Town is really huge: officially it has ~4 million people, and unofficially ~6 million. Certainly a lot to explore, and I hope to be back there one day (as soon as possible, actually).

The good, bad and ugly

I will start with bad and ugly as I want to finish with good notes.

Racism down there is still HUGE. You don't have signs on the road saying so, but there is clearly separation between white and black people. The houses near the uni all had fences on their walls (most of them even electrical ones with sharp blades on top) and bars on the windows. That just brings tension and certainly doesn't improve anything. To be honest, if someone wants to break in they still can do so easily, so the fences are maybe meant to intimidate, but they actually only bring tension (my personal view). Also, many houses have a sign for Armed Force Response (or something along those lines), meaning that in case someone starts breaking in, armed forces will come to protect the home.

Also, looking at the workforce, white people appear to hold most of the profitable/big-business positions and fields, while black people are street workers, bar workers, etc. On the street you can feel the tension between people from time to time. Going out to bars also showed the separation - they were either almost exclusively white or exclusively black. A very sad state to see. Sharing love and mixing is something that pushes us forward, and here I saw clear blockades to such things.

The bad part of Cape Town - and this is not special to Cape Town but true of almost all major cities - is that petty crime is widespread. Pickpocketing is something you must pay attention to. To me personally nothing happened, but I heard a lot of stories from friends on whom such activities were attempted (although I am not sure whether the criminals succeeded).

Enough of the bad, as my blog post will not change it; it is a topic for debate and active involvement, which I unfortunately can't take on at this moment.

THE GOOD!

There are so many great local people I met! As I mentioned, I want to visit that city again and again and again. If you don't fear those bad things, this city has great local cuisine, a lot of great people, an awesome artistic soul, and they dance with heart (I guess when you live in rough times, you try to use your free time at its best). There were differences between white and black bars/clubs - the white ones were almost like standard European ones, a lot of drinking and not much dancing, while the black ones had a lot of dancing and not much drinking (maybe economic power has something to do with it, but I certainly felt more love in the black bars).

Cape Town has an awesome mountain, Table Mountain. I went hiking with my friends, and I must say (again, to myself) - do the damn hiking as much as possible. After every hike I feel so inspired that I start thinking I hate myself for not doing it more often! The view from Table Mountain is just majestic (you can even see the Cape of Good Hope). The WOW moments just fire up in you.

Now let's move on to DebConf itself. As always, organization was at quite a high level. I loved the badge design; it had a map and a nice amount of information on it. The place we stayed was kind of not that good, but if you take into account that those are old student dorms (and we were all in a female student dorm :D ), it is pretty fancy on its own account. The talks were nearby, which is always good. The general layout of the talk venues and the front desk was perfect in my opinion - all in one place, basically.

Wine and Cheese this year was kind of a funny story because of the cheese restrictions, but the Cheese cabal managed to pull things off. It was actually very well organized. I met some new people during the party/ceremony, which always makes me grow as a person. The cultural mix at DebConf is just fantastic. Not only do you learn a lot about Debian and hacking on it, but the sheer cultural diversity makes this small con such a vibrant place and a home to many.

The Debian Dinner happened at the Aquarium, where I had a nice dinner and chat with my old friends. The Aquarium itself is a place where you can see a lot of strange creatures that live on this third rock from the Sun.

Speaking of old friends - I love that Apollo rejoined us again (after missing DebConf15), seeing Joel again (and he finally visited Banja Luka as an aftermath!), mbiebl, ah, moray, Milan, santiago and tons of others. Of course we always miss a few, such as zack and vorlon this year (but they had pretty okay-ish reasons, I would say).

Speaking of new friends, I made a few local friends, which makes me happy, and at least one Indian/Hindu friend. Why do I mention this separately - well, we had an incident during the Group Photo (btw, where is our Lithuanian, nowadays Germany-based, photographer?!) where 3 laptops of our GSoC students were stolen :( . I was lucky enough to be able, on behalf of Purism, to donate a Librem11 prototype to one of them, who ended up being the Indian friend. She is working on real-time communications, which is also of interest to Purism for our future projects.

Regarding the Debian Day Trip, Joel and I opted out and went on our own adventure through Cape Town in pursuit of meeting and talking to local people and finding out interesting things, which proved to be a great decision. We found out about their first-Thursday-of-the-month festival and about the Mama Africa restaurant. That restaurant is going into my special memories (me playing drums with the local band must always be a special memory, right?!).

Huh, to be honest, writing about DebConf would probably need a book by itself, and I always try to keep my posts as short as possible, so I will try to stop here (maybe I will write a few more bits about it in the future, but hardly).

Now the notes. Although I saw the racial segregation, I also saw the hope. These things need time. I come from a country that is torn apart by nationalism and religious hate, so I understand that this issue is hard and deep on so many levels. While the tensions are high, I see people trying to talk about it, trying to find solutions, and I feel it is slowly transforming into an open society, where we will realize that there is only one race on this planet and it is called the HUMAN RACE. We are all earthlings, and the sooner we realize that, the sooner we will be on a path to really build society up instead of faking things that actually enslave our minds.

In the end I just want to say thank you DebConf, thank you Debian; everyone could learn from this community as a model (which can be improved!) for future societies.

18 August, 2016 09:19AM by Zlatan Todoric

Norbert Tretkowski

No MariaDB MaxScale in Debian

Last weekend I started working on a MariaDB MaxScale package for Debian, of course with the intention to upload it into the official Debian repository.

Today I got pointed to an article by Michael "Monty" Widenius that he published two days ago. It explains the recent license change of MaxScale from GPL to BSL with the release of the MaxScale 2.0 beta. Justin Swanhart summarized the situation, and I could not agree more.

Looks like we will not see MaxScale 2.0 in Debian any time soon...

18 August, 2016 06:00AM

August 17, 2016

hackergotchi for Gunnar Wolf

Gunnar Wolf

Talking about the Debian keyring in Investigaciones Nucleares, UNAM

For the readers of my blog that happen to be in Mexico City, I was invited to give a talk at Instituto de Ciencias Nucleares, Ciudad Universitaria, UNAM.

I will be at Auditorio Marcos Moshinsky on August 26, starting at 13:00. Auditorio Marcos Moshinsky is where we met for the early (~1996-1997) Mexico Linux User Group meetings. And... wow, I'm amazed to realize it's been twenty years since I arrived there, young and innocent, the newest member of what looked like a sect obsessed with world domination and a penguin fetish.

Attachments: llavero_chico.png (220.84 KB), llavero_orig.png (1.64 MB)

17 August, 2016 06:47PM by gwolf

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, July 2016

A Debian LTS logoLike each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In July, 136.6 work hours have been dispatched among 11 paid contributors. Their reports are available:

  • Antoine Beaupré was allocated 4 hours again, but in the end he put his 8 pending hours back into the pool for the coming months.
  • Balint Reczey did 18 hours (out of 7 hours allocated + 2 remaining, thus keeping 2 extra hours for August).
  • Ben Hutchings did 15 hours (out of 14.7 hours allocated + 1 remaining, keeping 0.7 extra hours for August).
  • Brian May did 14.7 hours.
  • Chris Lamb did 14 hours (out of 14.7 hours, thus keeping 0.7 hours for next month).
  • Emilio Pozuelo Monfort did 13 hours (out of 14.7 hours allocated, thus keeping 1.7 extra hours for August).
  • Guido Günther did 8 hours.
  • Markus Koschany did 14.7 hours.
  • Ola Lundqvist did 14 hours (out of 14.7 hours assigned, thus keeping 0.7 extra hours for August).
  • Santiago Ruano Rincón did 14 hours (out of 14.7h allocated + 11.25 remaining, the 11.95 extra hours will be put back in the global pool as Santiago is stepping down).
  • Thorsten Alteholz did 14.7 hours.

Evolution of the situation

The number of sponsored hours jumped to 159 hours per month thanks to GitHub joining as our second platinum sponsor (funding 3 days of work per month)! Our funding goal is getting closer but it’s not there yet.

The security tracker currently lists 22 packages with a known CVE and the dla-needed.txt file likewise. That’s a sharp decline compared to last month.

Thanks to our sponsors

New sponsors are in bold.


17 August, 2016 02:45PM by Raphaël Hertzog

Jamie McClelland

Nice Work Apertium

For the last few years I have been periodically testing out apertium, and today I did so again and was pleasantly surprised by the quality of the English-Spanish and Spanish-English translations (and also their nifty web site translator).

So, I dusted off some of my geeky code to make it easier to use and continue testing.

For starters...

    sudo apt-get install apertium-en-es xclip coreutils
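
Before wiring anything up, a quick sanity check from the shell; apertium reads standard input when used in a pipe, just as in the mutt macro below (the sample sentence is my own):

    echo "¿Dónde está la biblioteca?" | apertium es-en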

Then, I added the following to my .muttrc file:

    macro pager <F2> "<enter-command>set pipe_decode<enter><pipe-entry> sed '1,/^$/d' | apertium es-en | less<enter><enter-command>unset pipe_decode<enter>" "translate from spanish"

If you press F2 while reading a message in Spanish, it will print out the English translation.

If you use vim, you can create ~/.vim/plugin/apertium.vim (note: vim auto-loads plugins from the plugin directory) with:

    function s:Translate()
        silent !clear
        execute "! apertium en-es " . bufname("%") . " | tee >(xclip)"
    endfunction
    command Translate :call <SID>Translate()

Then, you can type the command:

:Translate

And it will display the English-to-Spanish translation of the file you are editing and copy the translation into your clipboard so you can paste it into your document.

17 August, 2016 02:07PM

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

My Free Software Activities in July 2016

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donators (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

DebConf 16

I was in South Africa for the whole week of DebConf 16 and gave 3 talks/BoF. You can find the slides and the videos in the links of their corresponding page:

I was a bit nervous about the third BoF (on using Debian money to fund Debian projects), but I discussed it with many people during the week, and it looks like the project has evolved quite a bit in the last 10 years: while it's still a sensitive topic (and rightfully so, given the possible impacts), people are willing to discuss the issues and to experiment. You can have a look at the gobby notes that resulted from the live discussion.

I spent most of the time discussing with people and I did not do much technical work besides trying (and failing) to fix accessibility issues with tracker.debian.org (help from knowledgeable people is welcome, see #830213).

Debian Packaging

I uploaded a new version of zim to fix a reproducibility issue (and forwarded the patch upstream).

I uploaded Django 1.8.14 to jessie-backports and had to fix a failing test (pull request).

I uploaded python-django-jsonfield 1.0.1 a new upstream version integrating the patches I prepared in June.

I managed the (small) ftplib library transition. I prepared the new version in experimental, ensured that reverse build dependencies still build, and coordinated the transition with the release team. This was all triggered by a reproducible build bug that I got and that made me look at the package… last time I looked, upstream had disappeared (the upstream URL was even gone), but he seems to have become active again and has pushed a new release.

I filed wishlist bug #832053 to request a new deblog command in devscripts. It should make it easier to display current and former build logs.

Kali related Debian work

I worked on many issues that were affecting Kali (and Debian Testing) users:

  • I made an open-vm-tools NMU to get the package back into testing.
  • I filed #830795 on nautilus and #831737 on pbnj to forward Kali bugs to Debian.
  • I wrote a fontconfig patch to make it ignore .dpkg-tmp files. I also forwarded that patch upstream and filed a related bug in gnome-settings-daemon which is actually causing the problem by running fc-cache at the wrong times.
  • I started a discussion to see how we could fix the synaptics touchpad problem in GNOME 3.20. In the end, we have a new version of xserver-xorg-input-all which only depends on xserver-xorg-input-libinput and not on xserver-xorg-input-synaptics (no longer supported by GNOME). This is after upstream refused to reintroduce synaptics support.
  • I filed #831730 on desktop-base because KDE’s plasma-desktop is no longer using the Debian background by default. I had to seek upstream help to find out a possible solution (deployed in Kali only for now).
  • I filed #832503 because the way dpkg and APT manage foo:any dependencies when foo is not marked “Multi-Arch: allowed” is counter-productive… I discovered this while trying to use a firefox-esr:any dependency. And I filed #832501 to get the desired “Multi-Arch: allowed” marker on firefox-esr.

Thanks

See you next month for a new summary of my activities.


17 August, 2016 10:53AM by Raphaël Hertzog

hackergotchi for Michal Čihař

Michal Čihař

Weekly phpMyAdmin contributions 2016-W32

Tonight phpMyAdmin 4.0.10.17, 4.4.15.8, and 4.6.4 were released, and you can probably see that quite a few security issues were fixed. Most of them are not really exploitable unless your PHP and webserver are poorly configured, but it's still a good idea to upgrade.

Whether you are running Debian unstable, our phpMyAdmin PPA for Ubuntu, or the phpMyAdmin Docker image, upgrading should be as simple as pulling the new version.
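
For the Docker case, a sketch of such an upgrade, assuming the official phpmyadmin/phpmyadmin image and a container named pma (your original run flags will differ):

    docker pull phpmyadmin/phpmyadmin
    docker stop pma && docker rm pma
    # recreate with the same flags you used originally, for example:
    docker run -d --name pma -p 8080:80 --link mysql:db phpmyadmin/phpmyadmin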

Besides fixing security issues, we're generally hardening our infrastructure. I'm really grateful that Emanuel Bronshtein (@e3amn2l) is doing a great review of all of our code and helping us in this area. This will really make our code and infrastructure much better.

Handled issues:

Filed under: Debian English phpMyAdmin

17 August, 2016 10:00AM

Revoking old PGP key

It has already been six years since I moved to using an RSA4096 PGP key. For various reasons, the old DSA key was kept valid until today. That is no longer true: it has now been revoked.

The revoked key is DC3552E836E75604 and the new one is 9C27B31342B7511D. In case you've signed the old one but not the new one (quite unlikely unless you signed it more than six years ago), there is a migration document where you can verify that my new key is signed by the old one.
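
If you want to check the cross-signature yourself with stock GnuPG (key IDs as above; which keyserver gets used depends on your configuration), a sketch:

    gpg --recv-keys DC3552E836E75604 9C27B31342B7511D
    gpg --check-sigs 9C27B31342B7511D
    # among the listed signatures there should be one made by the old key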

Filed under: Debian English

17 August, 2016 08:00AM

hackergotchi for Charles Plessy

Charles Plessy

Who finished DEP 5?

Many people worked on finishing DEP 5. I think that Lars's blog post does not show well enough how collective the effort was.

Looking in the specification's text, one finds:

The following alphabetical list is incomplete; please suggest missing people:
Russ Allbery, Ben Finney, Sam Hocevar, Steve Langasek, Charles Plessy, Noah
Slater, Jonas Smedegaard, Lars Wirzenius.

The Policy's changelog mentions:

  * Include the new (optional) copyright format that was drafted as
    DEP-5.  This is not yet a final version; that's expected to come in
    the 3.9.3.0 release.  Thanks to all the DEP-5 contributors and to
    Lars Wirzenius and Charles Plessy for the integration into the
    Policy package.  (Closes: #609160)

 -- Russ Allbery <rra@debian.org>  Wed, 06 Apr 2011 22:48:55 -0700

and

debian-policy (3.9.3.0) unstable; urgency=low

  [ Russ Allbery ]
  * Update the copyright format document to the version of DEP-5 from the
    DEP web site and apply additional changes from subsequent discussion
    in debian-devel and debian-project.  Revise for clarity, to add more
    examples, and to update the GFDL license versions.  Thanks, Steve
    Langasek, Charles Plessy, Justin B Rye, and Jonathan Nieder.
    (Closes: #658209, #648387)

On my side, I am very grateful to Bill Allombert for having committed the document to the Git repository, which ended the debates.

17 August, 2016 04:08AM

hackergotchi for Sean Whitton

Sean Whitton

Tucson monsoon rains

When it rains in Tucson, people are able to take an unusually carefree attitude towards it. Although the storm is dramatic, and the amount of water means that the streets turn into rivers, everyone knows that it will be over in a few hours and the heat will return (and indeed, that's why drain provision is so paltry).

In other words, despite the arresting thunderclaps, the weather is not threatening. By contrast, when there is a storm in Britain, one feels a faint primordial fear that one won’t be able to find shelter after the storm, in the cold and sodden woods and fields. Here, that threat just isn’t present. I think that’s what makes us feel so free to move around in the rain.

I rode my bike back from the gym in my $5 plastic shoes. The rain hitting my body was cold, but the water splashing up my legs and feet was warm thanks to the surface of the road, except for one area where the road was steep enough that the running water had already carried away all the lingering heat.

17 August, 2016 03:28AM

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, July 2016

I was assigned another 14.7 hours of work by Freexian's Debian LTS initiative and carried over 1 hour from last month. I worked a total of 15 hours, carrying over a fraction of an hour.

I spent another week in the Front Desk role and triaged various new CVEs for wheezy.

I spent the remainder of the time working on the next Linux stable updates (3.2.82 and Debian 3.2.81-2), but didn't release them - that will be done in the next few days.

17 August, 2016 12:12AM

August 16, 2016

hackergotchi for Lars Wirzenius

Lars Wirzenius

20 years ago I became a Debian developer

Today it is 23 years since Ian Murdock published his intention to develop a new Linux distribution, Debian. It is also about 20 years since I became a Debian developer and made my first package upload.

In the time since:

  • I've retired a couple of times, to pursue other interests, and then un-retired.

  • I've maintained a bunch of different packages, most importantly the PGP2 software in the 90s. (I now only maintain software for which I'm also upstream, in order to make jokes about my upstream being an unco-operative jerk, and my packager being unhelpful in the extreme.)

  • Got kicked out of the Debian mailing lists for insulting another developer. Not my proudest moment. I was allowed back later, and I've tried to be polite ever since. (See also rule 6.)

  • I've been to a few Debconfs (3, 5, 6, 9, 10, 15). I'm looking forward to going to many more in the future. It's clear that seeing many project members at least every now and then has a very big impact on project cohesion.

  • I had a gig where I was paid to improve the technical quality of Debian. After a few months of bug fixing (which isn't my favourite pastime), I wrote piuparts in order to find new bugs. (I gave that project away many years ago, but it seems to still be going strong.)

  • I almost ran for DPL twice, but I'm glad I didn't. I've carefully avoided any positions of power or responsibility in the project. (I live in fear that someone will nominate me for something where I'd actually have to make important decisions.)

    Not being responsible means I can just ignore the project for a while when something annoying happens. (Or retire again.) With such a large project, eventually something really annoying does happen.

  • Came up with the DEP process with Zack and Dato. I also ran the second half of the DEP5 process to get the machine-readable debian/copyright format accepted. (I'm no longer involved, though, and I don't think DEP is used much now.)

  • I've taught several workshops about Debian packaging, including online for Debian-Women. It's always fun when others "get" how easy packaging really is, despite all the effort that the large variety in tooling and random web pages put into obscuring the fundamental simplicity.

  • Over the years I've enjoyed many of the things developed within Debian (without claiming any credit for myself):

    • the policy manual, perhaps the most important technical achievement of the project

    • the social contract and Debian free software guidelines, unarguably the most important non-technical achievements of the project

    • the whole package management system, but especially apt

    • debhelper's dh, which made the work of packaging simple cases so easy it's nearly a no-brainer

    • d-i made me not hate installing Debian (although I think the time is getting ripe to replace d-i with something new; catch me in a talkative mood at a party to hear more)

    • Debian-Women made an almost immediate improvement to the culture of the larger project (even if there are still far too few women developers)

    • the diversity statement made me a lot happier about being a project member.

    I'd like to thank everyone who's worked on these and made them happen. These are important milestones in Debian.

  • I've opened my mouth in a lot of places over the years, which means a lot of people know of me, but nobody can actually point at anything useful I've actually done. Which is why, when I've given talks at, say, FOSDEM, I get introduced as "the guy who shared an office with Linus Torvalds a long time ago".

  • I've made a number of friends via participation in Debian. I've found jobs via contacts in Debian, and have even started a side business with someone.

It's been a good twenty years. And the fun ain't over yet.

16 August, 2016 03:47PM

Bits from Debian

Debian turns 23!

Today is Debian's 23rd anniversary. If you are close to any of the cities celebrating Debian Day 2016, you're very welcome to join the party!

If not, there's still time for you to organize a little celebration or contribution to Debian. For example, you can have a look at the Debian timeline and learn about the history of the project. If you notice that some piece of information is still missing, feel free to add it to the timeline.

Or you can scratch your creative itch and suggest a wallpaper to be part of the artwork for the next release.

Our favorite operating system is the result of all the work we have done together. Thanks to everybody who has contributed in these 23 years, and happy birthday Debian!

16 August, 2016 12:30PM by Laura Arjona Reina

hackergotchi for Keith Packard

Keith Packard

udevwrap

Wrapping libudev using LD_PRELOAD

Peter Hutterer and I were chasing down an X server bug which was exposed when running the libinput test suite against the X server with a separate thread for input. This was crashing deep inside libudev, which led us to suspect that libudev was getting run from multiple threads at the same time.

I figured I'd be able to tell by wrapping all of the libudev calls from the server and checking to make sure we weren't ever calling it from both threads at the same time. My first attempt was a simple set of cpp macros, but that failed when I discovered that libwacom was calling libgudev, which was calling libudev.

Instead of recompiling the world with my magic macros, I created a new library which exposes all of the (public) symbols in libudev. Each of these functions does a bit of checking and then simply calls down to the 'real' function.

Finding the real symbols

Here's the snippet which finds the real symbols:

static void *udev_symbol(const char *symbol)
{
    static void *libudev;
    static pthread_mutex_t  find_lock = PTHREAD_MUTEX_INITIALIZER;

    void *sym;
    pthread_mutex_lock(&find_lock);
    if (!libudev) {
        libudev = dlopen("libudev.so.1.6.4", RTLD_LOCAL | RTLD_NOW);
    }
    sym = dlsym(libudev, symbol);
    pthread_mutex_unlock(&find_lock);
    return sym;
}

Yeah, the libudev version is hard-coded into the source; I didn't want to accidentally load the wrong one. This could probably be improved...

Checking for re-entrancy

As mentioned above, we suspected that the bug was caused when libudev got called from two threads at the same time. So, our checks are pretty simple; we just count the number of calls into any udev function (to handle udev calling itself). If there are other calls in progress, we make sure the thread ID for those is the same as the current thread's.

static void udev_enter(const char *func) {
    pthread_mutex_lock(&check_lock);
    /* any call already in flight must belong to this same thread */
    assert (udev_running == 0 || udev_thread == pthread_self());
    udev_thread = pthread_self();
    udev_func[udev_running] = func;
    udev_running++;
    pthread_mutex_unlock(&check_lock);
}

static void udev_exit(void) {
    pthread_mutex_lock(&check_lock);
    udev_running--;
    /* last nested call returned: no thread owns udev any more */
    if (udev_running == 0)
        udev_thread = 0;
    udev_func[udev_running] = 0;
    pthread_mutex_unlock(&check_lock);
}

Wrapping functions

Now, the ugly part -- libudev exposes 93 different functions, with a wide variety of parameters and return types. I wrote a hacky macro; invocations of it can be constructed pretty easily from the prototypes found in libudev.h, and each one expands into our stub function:

#define make_func(type, name, formals, actuals)     \
    type name formals {                             \
        type ret;                                   \
        static void *f;                             \
        if (!f)                                     \
            f = udev_symbol(__func__);              \
        udev_enter(__func__);                       \
        ret = ((typeof (&name)) f) actuals;         \
        udev_exit();                                \
        return ret;                                 \
    }

There are 93 invocations of this macro (or a variant for void functions) which look much like:

make_func(struct udev *,
      udev_ref,
      (struct udev *udev),
      (udev))

Using udevwrap

To use udevwrap, simply stick the filename of the .so in LD_PRELOAD and run your program normally:

# LD_PRELOAD=/usr/local/lib/libudevwrap.so Xorg 

Source code

I stuck udevwrap in my git repository:

http://keithp.com/cgi-bin/gitweb.cgi?p=udevwrap;a=summary

You can clone it using

$ git clone git://keithp.com/git/udevwrap

16 August, 2016 06:32AM

August 15, 2016

hackergotchi for Shirish Agarwal

Shirish Agarwal

The road to TOR

Happy Independence Day to all. I had been looking forward to this day so I could use it to share with my brothers and sisters what little I know about TOR. Independence means many things to many people. For me, it means having freedom, valuing it, and using it to benefit not just ourselves but people at large. And for that to happen, at least on the web, it has to rise above censorship if we are to get there at all. I am 40 years old, and if I can't read whatever I want to read without asking the state-military-corporate trinity, then be damned with that. Debconf was instrumental, as I was able to understand and share many of the privacy concerns that we all have. This blog post is partly a tribute to being part of a community and being part of Debconf16.

So, in that search for privacy a couple of years ago, I came across TOR. TOR stands for 'The Onion Router' project. Explaining tor is simple. Let us take the standard way in which we approach a website using a browser or any other means.

a. We type a site name, say debian.org, in the URL/URI bar.
b. The first thing the browser does is look into its DNS cache to see if the name has been looked up before. If it is something like debian.org, which has been used before, the entry is *fresh*, and there is content already cached, it serves the content from the cache right there.
c. If not, or if the content is stale, it generates a DNS lookup, which is relayed through resolvers until the IP address for the name is found and handed back to the browser.
d. The browser takes the IP address and opens a TCP connection to the server; the handshake happens, and after that it's business as usual.
e. If it doesn't work, you get errors like 'Could not connect to server xyz' or some special errors with error codes.

This is a much simplified version of what normally happens with most, if not all, browsers.
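
To watch the DNS step (c) in isolation, the dig tool (from the dnsutils package) is handy; its answer should include 5.153.231.4, the address used with whois below:

[$] dig +short debian.org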

One good way to see how the whole thing happens is to use traceroute together with the whois service.

For e.g. –

[$] traceroute debian.org

and then

[$] whois 5.153.231.4 | grep inetnum
inetnum: 5.153.231.0 - 5.153.231.255

Just using whois with the IP address gives much more. I shared a short version because I find it interesting that Debian has the whole 5.153.231.0/24 range booked, but speculating on that would probably be a job for a different day.

Now the differences when using TOR are three things –

a. The conversation is encrypted (somewhat like using https, but encrypted through the relays).
b. The conversation is relayed over 2-3 relays, and the DNS server at the other end will see a somewhat different identity.
c. It is only at the end-points that the conversation is in plain text.

For example, the TOR connection I'm using at the moment goes from me – France (relay) – Switzerland (relay) – Germany (relay) – WordPress.com. So wordpress thinks that the connection is happening via Germany, while I'm here in India. It also tells them that I'm running some version of MS-Windows and a different browser, while I'm actually in India, on Debian, using another browser altogether 🙂
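
You can also see this apparent-location effect from the command line, using torsocks (installed via the apt command below, and assuming the tor service is running) and any what-is-my-IP service; icanhazip.com here is just one example:

[$] curl https://icanhazip.com
[$] torsocks curl https://icanhazip.com

The first shows your real public address, the second the address sites see through Tor.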

There are various motivations for doing that. For myself, I'm just a private person and do not need or want any other person, or even the State, looking over my shoulder at what I'm doing. And the argument that we need to spy on citizens because there are terrorists doesn't hold water with me. There are many ways in which they can pass messages even without tor or the web. The Government-Corporate-Military trinity just gets more powerful if and when it knows what common people think, do, eat, etc.

So the question is, how do you install tor if you are a private sort of person? If you are on a Debian machine, you are one step closer to doing that.

So the first thing that you need to do is install the following –

$ sudo aptitude install ooniprobe python-certifi tor tor-geoipdb torsocks torbrowser-launcher

Once the above is done, run torbrowser-launcher. This is how it works out the first time it is run –

[$] torbrowser-launcher

Tor Browser Launcher
By Micah Lee, licensed under MIT
version 0.2.6
https://github.com/micahflee/torbrowser-launcher
Creating GnuPG homedir /home/shirish/.local/share/torbrowser/gnupg_homedir
Downloading and installing Tor Browser for the first time.
Downloading https://dist.torproject.org/torbrowser/update_2/release/Linux_x86_64-gcc3/x/en-US
Latest version: 6.0.3
Downloading https://dist.torproject.org/torbrowser/6.0.3/tor-browser-linux64-6.0.3_en-US.tar.xz.asc
Downloading https://dist.torproject.org/torbrowser/6.0.3/tor-browser-linux64-6.0.3_en-US.tar.xz
Verifying signature
Extracting tor-browser-linux64-6.0.3_en-US.tar.xz
Running /home/shirish/.local/share/torbrowser/tbb/x86_64/tor-browser_en-US/start-tor-browser.desktop
Launching './Browser/start-tor-browser --detach'...

As can be seen above, you basically download the tor browser from the website. Obviously, for this outgoing HTTPS (port 443) needs to be open.

One of the more interesting things is that it tells you where it installs the browser.

/home/shirish/.local/share/torbrowser/tbb/x86_64/tor-browser_en-US/Browser/start-tor-browser and then detaches.

The first time the TOR browser actually runs, it looks something like this –

[Tor Browser screenshot]

Additionally, it gives you 4 choices. Depending on your need for safety, security and convenience, you make a choice and live with it.

Now the only thing remaining is to make an alias for your torbrowser. So I made –

[$] alias tor

tor=/home/shirish/.local/share/torbrowser/tbb/x86_64/tor-browser_en-US/Browser/start-tor-browser

It is suggested that you do not use the same usernames on the onion network.

Also, apart from regular URL addresses such as 'flossexperiences.wordpress.com', you will also see sites such as https://www.abc12defgh3ijkl.onion.to (a fictional address).

Now there will be others who want to use the same or similar settings as in their Mozilla Firefox installation.

To do that, follow these steps –

a. First close both Torbrowser and Mozilla Firefox.
b. Open your file browser and go to where your mozilla profile details are. In typical Debian installations this is at

~/.mozilla/firefox/5r7t1r92.default

In the next tab, navigate to –

~/.local/share/torbrowser/tbb/x86_64/tor-browser_en-US/Browser/TorBrowser/Data/Browser/profile.default

c. Now copy the following files over from your mozilla profile to your tor browser profile, and you can resume where you left off (a shell sketch of this step follows after the lists below).

    cert8.db
    chromeappsstore.sqlite
    content-prefs.sqlite
    cookies.sqlite
    formhistory.sqlite
    key3.db
    logins.json (Firefox 32 and above)
    mimeTypes.rdf
    permissions.sqlite
    persdict.dat
    places.sqlite
    signons3.txt (if exists)
    webappsstore.sqlite

and the following folders/directories

    bookmarkbackups
    chrome (if it exists)
    searchplugins (if it exists)
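
A shell sketch of step c, assuming the two profile paths given above (adjust the random profile directory name to match yours):

SRC=~/.mozilla/firefox/5r7t1r92.default
DST=~/.local/share/torbrowser/tbb/x86_64/tor-browser_en-US/Browser/TorBrowser/Data/Browser/profile.default
# copy the files, skipping any that don't exist in your profile
for f in cert8.db chromeappsstore.sqlite content-prefs.sqlite cookies.sqlite \
         formhistory.sqlite key3.db logins.json mimeTypes.rdf permissions.sqlite \
         persdict.dat places.sqlite signons3.txt webappsstore.sqlite; do
    [ -e "$SRC/$f" ] && cp -v "$SRC/$f" "$DST/"
done
# and the folders, again only if present
for d in bookmarkbackups chrome searchplugins; do
    [ -d "$SRC/$d" ] && cp -rv "$SRC/$d" "$DST/"
done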

Once the above is done, fire up your torbrowser with the alias shared above. The alias usually goes in your .bashrc file or, depending on which shell interpreter you use, in its config file.

Welcome to the world of TOR. Now, if after a while you benefit from tor and would like to give back to the tor community, you should look up tor bridges and relays. As this blog post has become long enough, I will end it now, and hopefully we can talk about tor bridges and relays some other day.


Filed under: Miscellenous Tagged: #anonymity, #Debconf16, #debian, #tor, #torbrowser, GNU, Linux, Privacy

15 August, 2016 10:31AM by shirishag75