July 26, 2016


Norbert Preining

TUG 2016 – Day 1 – Routers and Reading

The first day of the real conference started with an excellent overview of what one can do with TeX, spanning from traditional scientific journal styles to generating router configurations for cruise ships.

All this was crowned with an invited talk by Kevin Larson from Microsoft’s typography department on how to support reading comprehension.

Pavneet Aurora – Opening: Passport to the TeX canvas

Pavneet, our never-sleeping host and master of organization, opened the conference with a very philosophical introduction, touching upon a wide range of topics, from Microsoft and Twitter to the beauty of books, pages, and type. I think at some point he even mentioned TeX, but I can’t remember for sure. His words set a very nice and all-inclusive stage: a community that is open to all kinds of influences, without any disregard or prejudice. Let us hope that it reflects reality. Thanks, Pavneet.

Geoffrey Poore – Advances in PythonTeX

Our first regular talk was by Geoffrey, reporting on recent advances in PythonTeX, a package that allows including Python code in your TeX document. Starting with an introduction to PythonTeX, Geoffrey reported on an improved verbatim environment, fvextra, which patches fancyvrb, and on improved interaction between tikz and PythonTeX.
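To give a flavour of the idea, here is a minimal PythonTeX document of my own (an illustration only, not an example from the talk): Python expressions are evaluated during a pythontex run and their results are typeset in place.

```latex
% Compile with: pdflatex doc.tex && pythontex doc.tex && pdflatex doc.tex
\documentclass{article}
\usepackage{pythontex}
\begin{document}

% Inline evaluation: the result of the expression is typeset here
$\sqrt{2} \approx \py{round(2**0.5, 6)}$

% A code block whose printed output is inserted into the document
\begin{pycode}
total = sum(range(1, 101))
print(r"The sum of the first 100 integers is %d." % total)
\end{pycode}

\end{document}
```

The three-pass compile cycle is needed because pythontex extracts the code from the first pdflatex run, executes it, and the second pdflatex run picks up the results.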

As I am a heavy user of listings for my teaching on algebraic specification languages, I will surely take a look at this package and see how it compares to listings.

Stefan Kottwitz – TeX in industry I: Programming Cisco network switches using TeX

Next was Stefan from Lufthansa Industry Solutions, who first reported on his working environment: cruise ships with a very demanding IT infrastructure that he has to design and implement. Then he introduced us to his way of generating IP configurations for all the devices using TeX. The reason he chose this method is that it allows him to generate proper documentation at the same time.
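A toy sketch of the general idea (my own illustration, not Stefan's actual setup): a single macro that emits both the human-readable documentation and the raw device configuration from the same data, so the two can never drift apart.

```latex
\documentclass{article}
\begin{document}

% One macro, two outputs: prose documentation plus the raw config lines.
\newcommand{\switch}[3]{%
  \subsection*{Device #1}
  Management address \texttt{#2} on VLAN #3.
  \begin{quote}\ttfamily
    hostname #1\\
    interface Vlan#3\\
    \hspace*{1em}ip address #2 255.255.255.0
  \end{quote}}

\switch{SW-DECK-01}{10.0.10.2}{10}
\switch{SW-DECK-02}{10.0.10.3}{10}

\end{document}
```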

It was surprising for me to hear that by using TeX he could produce well-designed and easily accessible documentation far more efficiently and quickly, which helped the company as well as made the clients happy!

Stefan Kottwitz – TeX in industry II: Designing converged network solutions

After a coffee break, Stefan continued his exploration into industrial usage of TeX, this time about using tikz to generate graphics representing the network topology on the ships.
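For flavour, here is the kind of topology diagram that tikz makes easy (a toy network of my own invention, not one of Stefan's):

```latex
\documentclass[tikz]{standalone}
\usetikzlibrary{positioning}
\begin{document}
\begin{tikzpicture}[box/.style={draw, rounded corners, minimum width=2.6cm}]
  % A trivial two-deck topology hanging off a core switch
  \node[box] (core)                           {Core switch};
  \node[box] (d1) [below left=1.2cm of core]  {Deck switch 1};
  \node[box] (d2) [below right=1.2cm of core] {Deck switch 2};
  \draw (core) -- (d1) node[midway, left=2pt]  {trunk};
  \draw (core) -- (d2) node[midway, right=2pt] {trunk};
\end{tikzpicture}
\end{document}
```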

Boris Veytsman – Making ACM LaTeX styles

Next up was Boris, who brought us back to traditional realms of TeX when he guided us into the abyss of the ACM LaTeX styles, which he tried to maintain for some time until he plunged into a complete rewrite.

Frank Mittelbach – Alice goes floating — global optimized pagination including picture placements

The last talk before lunch (probably a strategic placement, otherwise Frank would continue for hours and hours) was Frank on global optimization of page breaks. Frank showed us what can and cannot be done with current LaTeX, and how to play around with global optimization of pagination, using Alice in Wonderland as a running example. We can only hope that his package is soon available in an easily consumable version to play around with.

Thai lunch

Pavneet has organized three different lunch styles for the three days of the conference; today’s was Thai, with spring rolls, fried noodles, one kind of very interesting orange noodles, and chicken something.

Michael Doob – baseball rules summary

After lunch Michael gave us an accessible explanation of the most arcane rules a game can have – the baseball rules – by using pseudocode. I think the total number of lines of code needed to explain the overall rules would fill more pages than the New York phone book, so I am deeply impressed by all those who can understand these rules. Some of us even wandered off in the late afternoon to see a match, with live explanations from Michael.

Amartyo Banerjee, S.K. Venkatesan – A Telegram bot for printing LaTeX files

Next up was Amartyo, who showed a Telegram (as in the messenger application) bot running on a Raspberry Pi that receives (La)TeX files and sends back compiled PDF files. While it is not ready for consumption (if you sneeze, the bot will crash!), it looks like a promising application. Furthermore, it is nice to see how open APIs (like Telegram’s) can spur the development of useful tools, while closed APIs (including those that threaten their users, like WhatsApp’s) hinder it.

Norbert Preining – Security improvements in the TeX Live Manager and installer

Next up was my own talk about beefing up the security of TeX Live by providing integrity and authenticity checks via GnuPG, a feature that has been introduced with the recent release of TeX Live 2016.

The following discussion gave me several good ideas on how to further improve security and usability.

Arthur Reutenauer – The TeX Live M sub-project (and open discussion)

Arthur presented the TeX Live M project (where the M supposedly stands for Mojca, who unfortunately couldn’t attend): their aim is to provide a curated and quality-verified subset of TeX Live that is sufficiently complete for many applications, and easier to handle for distributors and packagers.

We had a lively discussion after Arthur’s short presentation, mostly about why TeX Live does not have an “on-the-fly” installation like MiKTeX. I insisted that this is already possible, using the “tex-on-the-fly” package, which uses the mktextex infrastructure, but also cautioned against using it by default due to the delays induced by repeatedly reading the TeX Live database. I think this is a worthwhile project for someone interested in learning the internals of TeX Live, but I am not sure whether I want to invest time into this feature.

Another discussion point was a testing infrastructure, which I am currently working on. This is in fact high on my list: to have some automatic minimal functionality testing – a LaTeX package should at least load!

Kevin Larson – Reading between the lines: Improving comprehension for students

Having a guest from Microsoft is rather rare in our quite Unix-centered environment, so big thanks to Pavneet again for setting up this contact, and big thanks to Kevin for coming.

Kevin gave us a profound introduction to reading disabilities and how to improve reading comprehension. Starting with an excursion into what makes a font readable and how Microsoft develops optimally readable fonts, he then turned to reading disabilities like dyslexia, and to how markup of text can increase students’ comprehension rates. He also toppled my long-held belief that dyslexia is connected to the similar shapes of letters being somehow visually misprocessed – this was the scientific consensus from the 1920s until the 70s, but since then researchers have abandoned this interpretation, and dyslexia is now linked to problems connecting shapes to phonemes.

Kevin did an excellent job with a slightly difficult audience – some people nitpicking about grammar differences between British and US English and permanently derailing the discussion, and, even more, the high percentage of participants with rather particular typographic tastes.

After the talk I had a lengthy discussion with Kevin about whether and how this research can be carried over to non-Roman writing systems, in particular Kanji/Hanzi-based writing systems, where dyslexia probably shows itself in a different way. Kevin also mentioned that they want to add interword spaces to Chinese to help learners of Chinese (children, foreigners) parse text better; studies showed that this helps a lot with comprehension.

On a meta-level, this talk bracketed nicely with the morning introduction by Pavneet, describing an open environment with stimulus back and forth in all directions. I am very happy that Kevin took the pains to come despite his tight schedule, and I hope that the future will bring better cooperation – in the end we are all working somehow on the same front – only the tools differ.

After the closing of the session, one part of our group went off to the baseball match, while another group dived into a Japanese-style izakaya, where we managed to kill huge amounts of sake and quite an amount of food. The photo shows me after the first bottle of sake, just sipping on an intermediate small amount of genshu (a kind of strong, undiluted sake) before continuing to the next bottle.

An interesting and stimulating first day of TUG, and I am sure that everyone was looking forward to day 2.

26 July, 2016 11:20AM by Norbert Preining

July 25, 2016

Simon Désaulniers

[GSOC] Week 8&9 Report

Week 8

This particular week has been tiresome, as I caught a cold ;). I came back from Cape Town, where DebConf was taking place. I arrived in Montreal in the middle of the week, so this week is not full of news…

What I’ve done

I have synced up with my coworker Nicolas Reynaud, who is working on building the indexing system on top of the DHT. We have worked together on critical algorithms: concurrent maintenance of data in the trie (PHT).

Week 9

What I’ve done

Since my mentor, who is also the main author of OpenDHT, was away presenting Ring at the RMLL, I was assigned tasks that needed attention quickly. I have been working on making OpenDHT run properly when compiled with a failing version of Apple’s LLVM. I’ve had the pleasure of debugging obscure runtime errors that only appear with a particular compiler – and I mean very obscure.

I have released OpenDHT 0.6.2! This release was meant to fix a critical functionality bug that would arise if one of the two routing tables (IPv4, IPv6) was empty. It was really critical for Ring to have the 0.6.2 version, because it is not rare for a user to connect through a router that does not provide an IPv6 address.

Finally, I have fixed some minor bugs in my work on the queries.

25 July, 2016 11:41PM


Norbert Preining

TUG 2016 – Day 0 – Books and Beers

The second pre-conference day was dedicated to books and beers, with a visit to an exquisite print studio and a beer-tasting session at one of the top craft breweries in Canada. In addition, we got a glimpse of the Canadian lifestyle by visiting Pavneet’s beautiful house in the countryside, as well as enjoying traditional-style pastries from a bakery.

Heidelberg printing machine at the Porcupine's Quill

In short, a perfect combination for us typography and beer savvy freaks!

This morning we had a somewhat early start from the hotel. Soon the bus left downtown Toronto and entered countryside Ontario: wide landscapes filled with huge (to my Japanese-calibrated senses) estates and houses, separated by fields, forests and wild landscape. Very beautiful, and inviting to live there. On our way to the printing workshop we stopped at Pavneet’s house for a very short visit of the exterior, which includes mathematics in the bricking. According to Pavneet, his kids hate to see math on the wall – I would be proud to have it.

Pavneet's house is hiding some mathematics

A bit further on we entered Erin, where the Porcupine’s Quill is located. A small building along the street – one could easily overlook this rare jewel! Even more so considering that, according to the owners, Google Maps has a bad error that would lead you to a completely different location. This printing workshop, led by Tim and Elke Inkster, produces books in a traditional style using an old Heidelberg offset printing machine.

Entrance to the Porcupine's Quill, a local bookshop doing excellent printing

Elke introduced us to the sewing of folded signatures together with a lovely old sewing machine. It was the first time I actually saw one in action.

Sewn signatures

Tim, the master of the printing shop, first entertained us with stories about Chinese publishers visiting them in the old Cold War times, before diving into explanations of the actual machines around, like the Heidelberg offset printing machine.

Master Tim is showing us offset technique

In the back of the basement of the little studio there is also a huge folding machine, which cuts up the big signatures of 16 pages and folds them into bundles. An impressive example of tricky engineering.

The folding machine creates 16 pages in proper order from a printed signature.

Due to the small size of the printing studio, we were actually split into two groups, and while the other group got its guided tour, we grabbed coffee and traditional cookies and pastries from the nearby Holtom’s Bakery. Loads of nice pastries with various fillings, my favorites being the slightly salty cherry pie and, above all, the rhubarb-raspberry pie.

Nearby old-style bakery, selling Viennese-style "Kaisersemmel", calling them "Kaiser buns". There must be an Austrian hiding somewhere around.

To my absolute astonishment I also found there a Viennese “Kaisersemmel“, called a “Kaiser bun” here, keeping the shape and the idea (but unfortunately not the crispy, cracky quality of the original in Vienna). Of course I got two of them, together with a nice jam from the region, and enjoyed this “Viennese breakfast” the next morning.

Viennese breakfast from the Bakery near Porcupine's Quill

Leaving the Quill we headed for lunch in a nice pizzeria (I got a Pizza Toscana) which also served excellent local beer – how I would like to have something like this over in Japan! Our last stop on this excursion was the Stone Hammer Brewery, one of the most famous craft breweries in Canada.

One of the top craft breweries in Canada, the Stone Hammer

Although they won’t win a prize for typography (besides one coaster there that carried a nice pun), their beers are exquisite. We got five different beers to taste, plus extensive explanations on brewing methods and differences. Now I finally understand why most of the new craft breweries in Japan make ales: ales don’t need a long process and are ready for sale in a rather short time, compared to e.g. lagers.

Explanations of the secrets of beer brewing

Nothing to add to this poster found in the Stone Hammer Brewery!
Also at the Stone Hammer Brewery I spotted this very nice poster on the wall of the toilet. And I cannot agree more, everything can easily be discussed over a good beer – it calms down aversions, makes even the worst enemies friends, and is healthy for both the mind and body.

Filled with excellent beer, some of us (notably an unnamed US TeXnician and politician) stocked up on beers to carry home. I was very tempted to get a huge batch, but putting cans into an airplane does not seem to be the best idea. Since we are talking cans: I was surprised to hear that many craft beer brewers nowadays prefer cans due to their better protection of the beer from light and oxygen, both killers of good beer.

Before leaving we took a last look at the Periodic Table of Beer Types, which left me in awe about how much I don’t know and probably never will. In particular, I heard for the first time of a “Vienna style beer” – Vienna is not really famous for beer; better to say, it is infamous. So maybe a different Vienna than my home town is meant here.

Lots to study here, the Periodic Table of Beers

Another two-hour bus ride brought us back to Toronto, where we met the other participants at a reception in a Mediterranean restaurant, where I could enjoy, for the first time in years, good tahina and hummus.

All in all another excellent day; now I would only like to have two days of holidays – I guess I will need to relax in the lectures starting tomorrow.

25 July, 2016 06:09PM by Norbert Preining

Sven Hoexter

me vs terminal emulator

I think my demands on a terminal emulator are pretty basic, but nonetheless I run into trouble every now and then. This time it was a new laptop and starting from scratch with an empty $HOME and the current Debian/testing instead of good old Jessie.

For the last four or five years I've been a happy user of gnome-terminal: a monospace font, a light grey background with black text color, new tabs with Ctrl-n, tab navigation with Ctrl-Left and Ctrl-Right, no menubar, URL selection with a double click. That suited me well with my similarly configured awesome window manager, where I navigate with Mod4-Left and Mod4-Right between the desktops on the local screen and only activate a handful of the many default tiling modes.

While I could get back most of my settings, somehow all the gconf kung-fu cited online to reconfigure the URL selection pattern in gnome-terminal failed, and copy&pasting URLs from the terminal was a pain in the ass. Long story short, I now followed the advice of a coworker to just use xfce4-terminal.

That still required a few tweaks to get it back to doing what I want it to do. To edit the keybindings you have to know that you have to use the GTK way and edit them within the menu, while selecting the menu entry. But you have to allow that first (why oh why?):

echo "gtk-can-change-accels=1" >> ~/.gtkrc-2.0

Fair enough, that is documented. Changing a keybinding generates fancy things in ~/.config/xfce4/terminal/accels.scm, in case you plan to hand-edit a few more of them.

I also edited a few things in ~/.config/xfce4/terminal/terminalrc:
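Settings in that file live under a [Configuration] section; for example, entries of roughly this shape (illustrative values only, not necessarily the ones I used):

```ini
[Configuration]
FontName=Monospace 11
ColorForeground=#000000
ColorBackground=#f2f2f2
MiscMenubarDefault=FALSE
MiscBordersDefault=FALSE
ScrollingLines=100000
```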


So I guess I can remove gnome-terminal for now and stay with another GTK2 application. Doesn't feel that good, but well, at least it works.

25 July, 2016 04:44PM

Mike Hommey

Announcing git-cinnabar 0.4.0 beta 2

Git-cinnabar is a git remote helper to interact with Mercurial repositories. It allows cloning, pulling and pushing from/to Mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.4.0b1?

  • Some more bug fixes.
  • Updated git to 2.9.2 for cinnabar-helper.
  • Now supports `git push --dry-run`.
  • Added a new `git cinnabar fetch` command to fetch a specific revision that is not necessarily a head.
  • Some improvements to the experimental native wire protocol support.

25 July, 2016 08:38AM by glandium


Norbert Preining

TUG 2016 – Day -1 – Wines and Waters

This year’s TUG is held in Toronto, Canada, and our incredible host Pavneet has managed to put together a busy program of excursions and events around the real conference. The -1st day (yeah, you read that right, the minus-first day), that is, two days before the actual conference starts, was dedicated to an excursion enjoying wines at the wine estate Château des Charmes, followed by a visit to Niagara Falls.

The amount of water running down the Horseshoe Falls is incredible

What should I say, if the first thing after getting out of the bus is a good wine, then there is nothing to go wrong …

I had arrived in Toronto two days earlier in the late afternoon, and spent Friday relaxing, recovering from the long flight, walking the city a bit, trying to fix my cold, and simply going slowly. On Saturday morning we met at a comfortable 10am (still too early for me, due to jet lag and a slightly late evening before), but the first two hours of bus drive allowed us to relax. Our first stop was the Château des Charmes, an impressive building surrounded by vineyards.

Entrance to the Château des Charmes

We were immediately treated to a sandwich lunch with white, red, and ice wine. A good start! And although breakfast wasn’t that long ago, the delicious sandwiches (at least the vegetarian ones I tried) were a good foundation for the wine.

Lunch impressions at Château des Charmes

After having refilled our energy reserves, we were ready to start our tour. Our guide, very – if not over – enthusiastic, explained to us that practically everything related to wine in Canada was started at this Château by the current owner: the Château system where farmer and wine producer are the same, the import of European grapes, winter protection methods, wine making – I was close to forgetting our Roman and Greek ancestors. At least she admitted that the ice wine was brought over by an Austrian – but it was perfected here, where the government controls are much stricter than anywhere else … hmmm, somehow I cannot completely believe all of this narrative, but at least it is enjoyable. Now that we knew all about the history, we dived into the production area and the barrel space, always accompanied by extensive comments and (self-)praise.

Enjoying the explanations of our very enthusiastic guide

After this exhaustive and exhausting round, we were guided back to the patio to taste another three wines: a white (a bit too warm, not so much my taste), a rosé (very good), and a red made from a new grape variety that first mutated here on the Château (interesting). As I hadn’t had enough, I tried to get something out of the big containers directly, but without success!

Me trying to get a full load of wine

Happy and content, and after passing through the shopping area, we boarded the bus to continue in the direction of Niagara Falls. Passing quite some nice houses of definitely quite rich people (although Pavneet told me that houses in Toronto are far more expensive than those here – how can anyone afford this?), we first had a view onto the lower Niagara river. A bit further on we were let out to see huge whirlpools in the river, where boat tours bring sightseers on a rough ride into the pool.

Whirlpools along the Niagara river

Only a slight bit further on we finally reached Niagara Falls, with a great view onto the American Falls in full power, and the Horseshoe Falls further up.

Horseshoe Falls in view

We immediately boarded a boat tour making a short trip to the Horseshoe Falls. Lining up with hundreds and hundreds of other spectators, we prepared for the splash in red rain wear (the US side uses blue – not that either side would rescue a wrong person and create an illegal immigrant!). The trip passes first under the American Falls and continues right into the mist that fills the middle of the Horseshoe Falls. A spectacular impression, with walls of water falling down on both sides.

Panorama of the Horseshoe Falls

Back from the splash and having dried our feet, we walked along the ridge to see the Horseshoe Falls from close up. The amount of water falling down these falls is incredible, and so is the erosion that creates the brown foam on the water down in the pool, made up of pulverized limestone. Blessed as we were, the sun was laughing all day and we got a nice rainbow right in the falls.

Horseshoe Falls with boat and rainbow, lovely.

The surroundings of the falls are less impressive – Disneyland? Horror cabinet? Yodel bar? A wild mixture of amusement-park-style locations squeezed together and overly full of people – as if enjoying nature itself were not enough. All immersed in ever-blasting loudspeaker music.

Torture around Niagara falls

The only plus point I could find in this encampment of forced happiness was a local craft beer brewer where one could taste 8 different beers – I made it only to four, though.

Beer tasting at Niagara Brewery, IPA, Amber ale, Lager, and some Light Fruity Radler beer.

Finally night fell and we moved down to the falls again to enjoy their illumination.

Night view onto Niagara falls

After this wonderful finish we boarded the bus back to Toronto, where we arrived around midnight. A long but very pleasurable Day Minus One!

Further photos can be seen at the gallery TUG 2016 – Day Minus One.

25 July, 2016 05:23AM by Norbert Preining


Martin Michlmayr

Debian on Jetson TK1


I became interested in running Debian on NVIDIA's Tegra platform recently. NVIDIA is doing a great job getting support for Tegra upstream (u-boot, kernel, X.org and other projects). As part of ensuring good Debian support for Tegra, I wanted to install Debian on a Jetson TK1, a development board from NVIDIA based on the Tegra K1 chip (Tegra 124), a 32-bit ARM chip.

Ian Campbell enabled u-boot and Linux kernel support and added support in the installer for this device about a year ago. I updated some kernel options since there has been a lot of progress upstream in the meantime, performed a lot of tests and documented the installation process on the Debian wiki. Wookey made substantial improvements to the wiki as well.

If you're interested in a good 32-bit ARM development platform, give Debian on the Jetson TK1 a try.

There's also a 64-bit board. More on that later...

25 July, 2016 01:31AM

July 24, 2016

Tom Marble


J'ai gagné le Tour de Crosstown 2016!

Everyone knows that today the finish line of Le Tour de France was crossed on the Champs-Élysées in Paris… And if you haven't seen any of the videos, I highly recommend checking out the onboard camera views and the landscapes! Quel beau pays – what a beautiful country!

I'm happy to let you know that today I won the Tour de Crosstown 2016, the cycling competition at Lifetime Crosstown inspired by, and concurrent with, Le Tour de France. There were about twenty cyclists competing to see who could earn the most points -- by attending cycling class, bien sûr (of course). I earned the maillot jaune (yellow jersey) with 23 points, while my closest competitor had 16 points (with the peloton far behind). But that's just part of the story.

Tour de Crosstown 2016

For some time I've been coming to Life Time Fitness at Crosstown for yoga (in Josefina's class) and to play racquetball with my friend David. The cycling studio is right next to the racquetball courts, and there's been a class on Saturdays at the same time we usually play. I told David that it looked like fun, and he said, having tried it, that it is fun (and a big workout). In early June David got busy and then had an injury that has kept him off the court ever since. So one Saturday morning I decided to try cycling.

I borrowed a heart rate monitor (but had no idea what it was for) and tried to bike along in my regular gym shorts, shoes and a t-shirt. Despite being a cycling newbie I was immediately captured by Alison's music and enthusiasm. She's dancing on her bike and you can't help but lock in the beat. Of course that's just after she tells you to dial up the resistance... and the sweat just pours out!

I admit that workout hit me pretty hard, but I had to come back and try the 5:45 am Wednesday EDGE cycle class (gulp). Despite what sounds like a crazy impossible time to get out and on a bike it actually works out super well. This plan requires one to up-level one's organization and after the workout I can assure you that you're fully awake and charged for the day!

Soon I invested in my own heart rate monitor. Then I realized it would work so much better if I had a metabolic assessment to tune my aerobic and anaerobic training zones. While I signed up for the assessment I decided to work with May as my personal trainer. In addition to helping me with my upper body (complementing the cycling) May is a nutritionist and has helped me dial in this critical facet of training. Even though I'm still working to tune my diet around my workouts, I've already learned a lot by using My Fitness Pal and, most importantly, I have a whole new attitude about food.

For the curious: the house nutritionist was away in France for the month of July.

Soon I would invest in bike shoes, jerseys and shorts, and begin to push myself into the proper zones during workouts and fuel my body properly afterwards. All these changes have led to dramatic weight loss \o/

A few of you know that the past two years have involved a lot of personal hardship. Upon reflection I have come to appreciate that the things in my life that I can actually control are a massive opportunity. I decided that fixing my exercise and nutrition were the opportunities I wanted to focus on. A note for my Debian friends: I'm sorry to have missed you in Cape Town, but I hope to join you in Montréal next year.

So when the Tour de Crosstown started in July I decided this was the time for me to get serious. I want to thank all the instructors for the great workouts (and for all the calories I've left on the bike): Alison, Kristine, Olivia, Tasha, and Caroline!

The results of my lifestyle changes are hard to describe… I feel an amazing amount of energy every day. The impact of a prior back injury is now almost non-existent. And the range of motion I hadn't recovered after being "washing-machined" by a 3 meter wave while body surfing at the beach in Hossegor the previous summer is now fully back.

Now I'm thinking it's time to treat myself to a new bike :) I'm looking at large touring frames and am currently thinking of the Surly Disc Trucker. In terms of bike shops I've had a good experience with One on One and Grand Performance has come highly recommended. If anyone has suggestions for bikes, bike features, or good shops please let me know!

I would encourage everyone here in Minneapolis to join me as a guest for a Wednesday morning 5:45am EDGE cycle class. I'm betting you'll have as much fun as I do… and I guarantee you will sweat! The challenge of waking up will pay off handsomely in making you energized for the whole day.

Let's bike – allons-y!

24 July, 2016 11:22PM


Dirk Eddelbuettel



The second Armadillo release of the 7.* series came out a few weeks ago: version 7.200.2. And a new RcppArmadillo version is now on CRAN and uploaded to Debian. This followed the usual thorough reverse-dependency checking of the by now over 240 packages using it.

For once, I let it simmer a little, preparing only a package update via the GitHub repo without a CRAN upload, to lower the update frequency a little. Seeing that Conrad has started to release 7.300.0 tarballs, the time for a (final) 7.200.2 upload was right.

Just like the previous release, it now requires a recent enough compiler. As g++ is so common, we explicitly test for version 4.6 or newer. So if you happen to be on an older RHEL or CentOS release, you may need to get yourself a more modern compiler. The g++ shipped for R on Windows is now at 4.9.3, which is a decent (yet stable) choice; the 4.8 series of g++ will also do. For reference, the current LTS of Ubuntu is at 5.4.0, and we have g++ 6.1 available in Debian testing.

This new upstream release adds new indexing helpers, additional return codes on some matrix transformations, increased speed for compound expressions via vectorise, corrects some LAPACK feature detections (affecting principally complex number use under OS X), and a rewritten sample() function thanks to James Balamuta.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab.

Changes in this release (and the preceding GitHub-only release) are as follows:

Changes in RcppArmadillo version (2016-07-22)

  • Upgraded to Armadillo release 7.200.2

  • The sampling extension was rewritten to use Armadillo vector types instead of Rcpp types (PR #101 by James Balamuta)

Changes in RcppArmadillo version (2016-06-06)

  • Upgraded to Armadillo release 7.200.1

    • added .index_min() and .index_max()

    • expanded ind2sub() to handle vectors of indices

    • expanded sub2ind() to handle matrix of subscripts

    • expanded expmat(), logmat() and sqrtmat() to optionally return a bool indicating success

    • faster handling of compound expressions by vectorise()

  • The configure code now (once again) sets the values for the LAPACK feature #define correctly.

Courtesy of CRANberries, there is a diffstat report. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

24 July, 2016 07:35PM

Elena 'valhalla' Grandi

One Liberated Laptop

After many days of failed attempts, yesterday @Diego Roversi finally managed to setup SPI on the BeagleBone White¹, and that means that today at our home it was Laptop Liberation Day!

We took the spare X200, opened it, found the point we had reached in the tutorial on installing libreboot on the X200 https://libreboot.org/docs/install/x200_external.html, connected all of the proper cables to the clip³ and did some reading tests of the original BIOS.


While the tutorial mentioned a very conservative setting (512 kHz), just for fun we tried reading at different speeds: all results up to 16384 kHz were identical, with the first failure at 32784 kHz, so we settled on using 8192 kHz.
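The read-back comparison can be scripted. Here is a small sketch: the helper just checks that all dumps hash identically, while the commented-out loop shows a hypothetical flashrom invocation in the style of the libreboot external-flashing docs (the device path and speed list are assumptions):

```shell
# Succeed only if every given dump file has the same SHA-1,
# i.e. all reads of the flash chip agreed.
all_reads_equal() {
    [ "$(sha1sum "$@" | awk '{print $1}' | sort -u | wc -l)" -eq 1 ]
}

# Hypothetical read loop (needs the clip wired up as in the tutorial):
#   for speed in 512 1024 2048 4096 8192 16384; do
#       flashrom -p linux_spi:dev=/dev/spidev1.0,spispeed=$speed -r "read_$speed.bin"
#   done
#   all_reads_equal read_*.bin && echo "reads consistent"
```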

Then it was time to customize our libreboot image with the right MAC address, and that's when we realized that the sheet of paper where we had written it down the last time had been put in a safe place… somewhere…

Luckily we also had taken a picture, and that was easier to find, so we checked the keyboard map², followed the instructions to customize the image https://libreboot.org/docs/hcl/gm45_remove_me.html#ich9gen, flashed the chip, partially reassembled the laptop, started it up and… a black screen, some fan noise and nothing else.

We tried to reflash the chip (nothing was changed), tried the us keyboard image, in case it was the better tested one (same results) and reflashed the original bios, just to check that the laptop was still working (it was).

It was lunchtime, so we stopped our attempts. As soon as we started eating, however, we realized that this laptop came with 3GB of RAM, and that surely meant "no matching pairs of RAM", so just after lunch we reflashed the first image, removed one dimm, rebooted and finally saw a gnu-hugging penguin!

We then tried booting a random live USB key https://tails.boum.org/ we had around (it failed the first time, then worked on the second and further attempts with no changes), and then proceeded to install Debian.

Running the installer required some attempts and a bit of duckduckgoing: parsing the isolinux / grub configurations from the libreboot menu didn't work, but in the end it was as easy as going to the command line and running:

linux (usb0)/install.amd/vmlinuz
initrd (usb0)/install.amd/initrd.gz

From there on, it was the usual Debian installation in a well-known environment, and there were no surprises. I've noticed that grub-coreboot is not installed (grub-pc is) and I want to investigate a bit, but rebooting worked out of the box with no issues.

Next step will be liberating my own X200 laptop, and then if you are around the @Gruppo Linux Como area and need a 16 pin clip let us know and we may bring everything to one of the LUG meetings⁴

¹ yes, white, and most of the instructions on the interwebz talk about the black, which is extremely similar to the white… except where it isn't

² wait? there are keyboard maps? doesn't everybody just use the us one regardless of what is printed on the keys? Do I *live* with somebody who doesn't? :D

³ the breadboard in the picture is only there for the power supply, the chip on it is a cheap SPI flash used to test SPI on the bone without risking the laptop :)

⁴ disclaimer: it worked for us. it may not work on *your* laptop. it may brick it. it may invoke a tentacled monster, it may bind your firstborn son to a life of servitude to some supernatural being. Whatever happens, it's not our fault.

24 July, 2016 06:35PM by Elena ``of Valhalla''

hackergotchi for Gregor Herrmann

Gregor Herrmann

RC bugs 2016/01-29

seems I've neglected both my blog & my RC bug fixing activities in the last months. – anyway, since I still keep track of RC bugs I worked on, I thought I might as well publish the list:

  • #798023 – src:cssutils: "cssutils: FTBFS with Python 3.5"
    sponsor NMU by Chris Knadle, upload to DELAYED/2
  • #800303 – src:libipc-signal-perl: "libipc-signal-perl: Please migrate a supported debhelper compat level"
    bump debhelper compat level, upload to DELAYED/5
  • #808331 – src:libpgplot-perl: "libpgplot-perl: needs manual rebuild for Perl 5.22 transition"
    manually build+upload packages (pkg-perl)
  • #809056 – src:xca: "xca: FTBFS due to CPPFLAGS containing spaces"
    sponsor NMU by Chris Knadle, upload to DELAYED/5
  • #810017 – src:psi4: "psi4: FTBFS with perl 5.22"
    propose a patch
  • #810707 – src:libdbd-sqlite3-perl: "libdbd-sqlite3-perl: FTBFS: t/virtual_table/21_perldata_charinfo.t (Wstat: 512 Tests: 4 Failed: 1)"
    investigate a bit (pkg-perl)
  • #810710 – src:libdata-objectdriver-perl: "libdata-objectdriver-perl: FTBFS: t/02-basic.t (Wstat: 256 Tests: 67 Failed: 1)"
    add patch to handle sqlite 3.10 in test suite's version comparison (pkg-perl)
  • #810900 – libanyevent-rabbitmq-perl: "libanyevent-rabbitmq-perl: Can't locate object method "bind_exchange" via package "AnyEvent::RabbitMQ::Channel""
    add info, versioned close (pkg-perl)
  • #810910 – libmath-bigint-gmp-perl: "libmath-bigint-gmp-perl: FTBFS: test failures with newer libmath-bigint-perl"
    upload new upstream release (pkg-perl)
  • #810912 – libx11-xcb-perl: "libx11-xcb-perl: missing dependency on libxs-object-magic-perl"
    update dependencies (pkg-perl)
  • #813420 – src:libnet-server-mail-perl: "libnet-server-mail-perl: FTBFS: error: Can't call method "peerhost" on an undefined value at t/starttls.t line 78."
    close, as the underlying problem is fixed (pkg-perl)
  • #814730 – src:libmath-mpfr-perl: "libmath-mpfr-perl: FTBFS on most architectures"
    upload new upstream release (pkg-perl)
  • #815775 – zeroc-ice: "Build-Depends on unavailable packages mono-gmcs libmono2.0-cil"
    sponsor NMU by Chris Knadle, upload to DELAYED/2
  • #816527 – src:libtest-file-contents-perl: "libtest-file-contents-perl: FTBFS with Text::Diff 1.44"
    upload new upstream release (pkg-perl)
  • #816638 – mhonarc: "mhonarc: fails to run with perl5.22"
    propose a patch in the BTS
  • #817528 – src:libemail-foldertype-perl: "libemail-foldertype-perl: Removal of debhelper compat 4"
    raise debhelper compat level, upload to DELAYED/2
  • #817529 – src:libimage-base-bundle-perl: "libimage-base-bundle-perl: Removal of debhelper compat 4"
    raise debhelper compat level, upload to DELAYED/2
  • #817530 – src:liberror-perl: "liberror-perl: Removal of debhelper compat 4"
    raise debhelper compat level, upload to DELAYED/2
  • #817531 – src:libimage-info-perl: "libimage-info-perl: Removal of debhelper compat 4"
    raise debhelper compat level, upload to DELAYED/2
  • #817647 – src:randomplay: "randomplay: Removal of debhelper compat 4"
    raise debhelper compat level, upload to DELAYED/2
  • #818924 – libjson-webtoken-perl: "libjson-webtoken-perl: missing dependency on libmodule-runtime-perl"
    add missing (build) dependency (pkg-perl)
  • #819787 – libdbix-class-schema-loader-perl: "libdbix-class-schema-loader-perl: FTBFS: t/10_01sqlite_common.t failure"
    close bug, works again with recent sqlite3 (pkg-perl)
  • #821412 – libnet-rblclient-perl: "libnet-rblclient-perl: Net::DNS 1.01 breaks Net::RBLClient"
    add patch (pkg-perl)
  • #823310 – libnanomsg-raw-perl: "libnanomsg-raw-perl: FTBFS: test failures"
    add patch from upstream git repo (pkg-perl)
  • #824046 – src:libtkx-perl: "libtkx-perl: FTBFS: Tcl error 'Foo at /usr/lib/x86_64-linux-gnu/perl5/5.22/Tcl.pm line 585.\n' while invoking scalar result call"
    first investigation (pkg-perl)
  • #824143 – libperinci-sub-normalize-perl: "libperinci-sub-normalize-perl: FTBFS: Can't locate Sah/Schema/Rinci.pm in @INC"
    upload new upstream version (pkg-perl)
  • #825366 – src:libdist-zilla-plugin-ourpkgversion-perl: "libdist-zilla-plugin-ourpkgversion-perl: FTBFS: Can't locate Path/Class.pm in @INC"
    add missing dependency (pkg-perl)
  • #825424 – libdist-zilla-plugin-test-podspelling-perl: "libdist-zilla-plugin-test-podspelling-perl: FTBFS: Can't locate Path/Class.pm in @INC"
    first triaging, forward upstream, import new release (pkg-perl)
  • #825608 – libnet-jifty-perl: "libnet-jifty-perl: FTBFS: t/006-uploads.t failure"
    triaging, mark unreproducible (pkg-perl)
  • #825629 – src:libgd-perl: "libgd-perl: FTBFS: Could not find gdlib-config in the search path. "
    first triaging, forward upstream (pkg-perl)
  • #829064 – libparse-debianchangelog-perl: "libparse-debianchangelog-perl: FTBFS with new tidy version"
    patch TML template (pkg-perl)
  • #829066 – libparse-plainconfig-perl: "FTBFS: Can't modify constant item in scalar assignment"
    new upstream release (pkg-perl)
  • #829176 – src:libapache-htpasswd-perl: "libapache-htpasswd-perl: FTBFS: dh_clean: Please specify the compatibility level in debian/compat"
    add debian/compat, upload to DELAYED/2
  • #829409 – src:libhtml-tidy-perl: "libhtml-tidy-perl: FTBFS: Failed 7/21 test programs. 8/69 subtests failed."
    apply patches from Simon McVittie (pkg-perl)
  • #829668 – libparse-debianchangelog-perl: "libparse-debianchangelog-perl: FTBFS: Failed test 'Output of dpkg_str equal to output of dpkg-parsechangelog'"
    add patch for compatibility with new dpkg (pkg-perl)
  • #829746 – src:license-reconcile: "license-reconcile: FTBFS: Failed 7/30 test programs. 11/180 subtests failed."
    versioned close, already fixed in latest upload (pkg-perl)
  • #830275 – src:libgravatar-url-perl: "libgravatar-url-perl: accesses the internet during build"
    skip test which needs internet access (pkg-perl)
  • #830324 – src:libhttp-async-perl: "libhttp-async-perl: accesses the internet during build"
    add patch to skip tests with DNS queries (pkg-perl)
  • #830354 – src:libhttp-proxy-perl: "libhttp-proxy-perl: accesses the internet during build"
    skip tests which need internet access (pkg-perl)
  • #830355 – src:libanyevent-http-perl: "libanyevent-http-perl: accesses the internet during build"
    skip tests which need internet access (pkg-perl)
  • #830356 – src:libhttp-oai-perl: "libhttp-oai-perl: accesses the internet during build"
    add patch to skip tests with DNS queries (pkg-perl)
  • #830476 – src:libpoe-component-client-http-perl: "libpoe-component-client-http-perl: accesses the internet during build"
    update existing patch (pkg-perl)
  • #831233 – src:libmongodb-perl: "libmongodb-perl: build hangs with sbuild and libeatmydata"
    lower severity (pkg-perl)
  • #832361 – src:libmousex-getopt-perl: "libmousex-getopt-perl: FTBFS: Failed 2/22 test programs. 2/356 subtests failed."
    upload new upstream release (pkg-perl)

24 July, 2016 04:34PM

Russ Allbery

Review: The Run of His Life

Review: The Run of His Life, by Jeffrey Toobin

Publisher: Random House
Copyright: 1996, 1997
Printing: 2015
ISBN: 0-307-82916-2
Format: Kindle
Pages: 498

The O.J. Simpson trial needs little introduction to anyone who lived through it in the United States, but a brief summary for those who didn't.

O.J. Simpson is a Hall of Fame football player and one of the best running backs to ever play the game. He's also black, which is very relevant to much of what later happened. After he retired from professional play, he became a television football commentator and a spokesperson for various companies (particularly Hertz, a car rental business). In 1994, he was arrested for the murder of two people: his ex-wife, Nicole Brown Simpson, and Ron Goldman (a friend of Nicole's). The arrest happened after a bizarre low-speed police chase across Los Angeles in a white Bronco that was broadcast live on network television. The media turned the resulting criminal trial into a reality TV show, with live cable television broadcasts of all of the court proceedings. After nearly a full year of trial (with the jury sequestered for nine months — more on that later), a mostly black jury returned a verdict of not guilty after a mere four hours of deliberation.

Following the criminal trial, in an extremely unusual legal proceeding, Simpson was found civilly liable for Ron Goldman's death in a lawsuit brought by his family. Bizarre events surrounding the case continued long afterwards. A book titled If I Did It (with "if" in very tiny letters on the cover) was published, ghost-written but allegedly with Simpson's input and cooperation, and was widely considered a confession. Another legal judgment let the Goldman family get all the profits from that book's publication. In an unrelated (but also bizarre) incident in Las Vegas, Simpson was later arrested for kidnapping and armed robbery and is currently in prison until at least 2017.

It is almost impossible to have lived through the O.J. Simpson trial in the United States and not have formed some opinion on it. I was in college and without a TV set at the time, and even I watched some of the live trial coverage. Reactions to the trial were extremely racially polarized, as you might have guessed. A lot of black people believed at the time that Simpson was innocent (probably fewer now, given subsequent events). A lot of white people thought he was obviously guilty and was let off by a black jury for racial reasons. My personal opinion, prior to reading this book, was a common "third way" among white liberals: Simpson almost certainly committed the murders, but the racist Los Angeles police department decided to frame him for a crime that he did commit by trying to make the evidence stronger. That's a legitimate reason in the US justice system for finding someone innocent: the state has an obligation to follow correct procedure and treat the defendant fairly in order to get a conviction. I have a strong bias towards trusting juries; frequently, it seems that the media second-guesses the outcome of a case that makes perfect sense as soon as you see all the information the jury had (or didn't have).

Toobin's book changed my mind. Perhaps because I wasn't watching all of the coverage, I was greatly underestimating the level of incompetence and bad decision-making by everyone involved: the prosecution, the defense, the police, the jury, and the judge. This court case was a disaster from start to finish; no one involved comes away looking good. Simpson was clearly guilty given the evidence presented, but the case was so badly mishandled that it gave the jury room to reach the wrong verdict. (It's telling that, in the far better managed subsequent civil case, the jury had no trouble finding him liable.)

The Run of His Life is a very detailed examination of the entire Simpson case, from the night of the murder through the end of the trial and (in an epilogue) the civil case. Toobin was himself involved in the media firestorm, breaking some early news of the defense's decision to focus on race in The New Yorker and then involved throughout the trial as a legal analyst, and he makes it clear when he becomes part of the story. But despite that, this book felt objective to me. There are tons of direct quotes, lots of clear description of the evidence, underlying interviews with many of the people involved to source statements in the book, and a comprehensive approach to the facts. I think Toobin is a bit baffled by the black reaction to the case, and that felt like a gap in the comprehensiveness and the one place where he might be accused of falling back on stereotypes and easy judgments. But other than that hole, Toobin applies his criticism even-handedly and devastatingly to all parties.

I won't go into all the details of how Toobin changed my mind. It was a cumulative effect across the whole book, and if you're curious, I do recommend reading it. A lot was the detailed discussion of the forensic evidence, which was undermined for the jury at trial but looks very solid outside the hothouse of the case. But there is one critical piece that I would hope would be handled differently today, twenty years later, than it was by the prosecutors in that case: Simpson's history of domestic violence against Nicole. With what we now know about patterns of domestic abuse, the escalation to murder looks entirely unsurprising. And that history of domestic abuse was exceedingly well-documented: multiple external witnesses, police reports, and one actual prior conviction for spousal abuse (for which Simpson did "community service" that was basically a joke). The prosecution did a very poor job of establishing this history and the jury discounted it. That was a huge mistake by both parties.

I'll mention one other critical collection of facts that Toobin explains well and that contradicted my previous impression of the case: the relationship between Simpson and the police.

Today, in the era of Black Lives Matter, the routine abuse of black Americans by the police is more widely known. At the time of the murders, it was less recognized among white Americans, although black Americans certainly knew about it. But even in 1994, the Los Angeles police department was notorious as one of the most corrupt and racist big-city police departments in the United States. This is the police department that beat Rodney King. Mark Fuhrman, one of the police officers involved in the case (although not that significantly, despite his role at the trial), was clearly racist and had no business being a police officer. It was therefore entirely believable that these people would have decided to frame a black man for a murder he actually committed.

What Toobin argues, quite persuasively and with quite a lot of evidence, is that this analysis may make sense given the racial tensions in Los Angeles but ignores another critical characteristic of Los Angeles politics, namely a deference to celebrity. Prior to this trial, O.J. Simpson largely followed the path of many black athletes who become broadly popular in white America: underplaying race and focusing on his personal celebrity and connections. (Toobin records a quote from Simpson earlier in his life that perfectly captures this approach: "I'm not black, I'm O.J.") Simpson spent more time with white businessmen than the black inhabitants of central Los Angeles. And, more to the point, the police treated him as a celebrity, not as a black man.

Toobin takes some time to chronicle the remarkable history of deference and familiarity that the police showed Simpson. He regularly invited police officers to his house for parties. The police had a long history of largely ignoring or downplaying his abuse of his wife, including not arresting him in situations that clearly seemed to call for that, showing a remarkable amount of deference to his side of the story, not pursuing clear violations of the court judgment after his one conviction for spousal abuse, and never showing much inclination to believe or protect Nicole. Even on the night of the murder, they started following a standard playbook for giving a celebrity advance warning of investigations that might involve them before the news media found out about them. It seems clear, given the evidence that Toobin collected, that the racist Los Angeles police didn't focus that animus at Simpson, a wealthy celebrity living in Brentwood. He wasn't a black man in their eyes; he was a rich Hall of Fame football player and a friend.

This obviously raises the question of how the jury could return a not-guilty verdict. Toobin provides plenty of material to analyze that question from multiple angles in his detailed account of the case, but I can tell you my conclusion: Judge Lance Ito did a horrifically incompetent job of managing the case. He let the lawyers wander all over the case, interacted bizarrely with the media coverage (and was part of letting the media turn it into a daytime drama), was not crisp or clear about his standards of evidence and admissibility, and, perhaps worst of all, let the case meander on at incredible length, with a fully sequestered jury allowed only brief conjugal visits and no media contact (not even bookstore shopping!).

Quite a lot of anger was focused on the jury after the acquittal, and I do think they reached the wrong conclusion and had all the information they would have needed to reach the correct one. But Toobin touches on something that I think would be very hard to comprehend without having lived through it. The jury and alternate pool essentially lived in prison for nine months, with guards and very strict rules about contact with the outside world, in a country where compensation for jury duty is almost nonexistent. There were a lot of other factors behind their decision, including racial tensions and the sheer pressure from judging a celebrity case about which everyone has an opinion, but I think it's nearly impossible to overestimate the psychological tension and stress from being locked up with random other people under armed guard for three quarters of a year. It's hard for jury members to do an exhaustive and careful deliberation in a typical trial that takes a week and doesn't involve sequestration. People want to get back to their lives and families. I can only imagine the state I would be in after nine months of this, or what poor psychological shape I would be in to make a careful and considered decision.

Similarly, for those who condemned the jury for profiting via books and media appearances after the trial, the current compensation for jurors is $15 per day (not hour). I believe at the time it was around $5 per day. There are a few employers who will pay full salary for the entire jury service, but only a few; many cap the length at a few weeks, and some employers treat all jury duty as unpaid leave. The only legal requirement for employers in the United States is that employees that serve on a jury have their job held for them to return to, but compensation is pathetic, not tied to minimum wage, and employers do not have to supplement it. I'm much less inclined to blame the jurors than the system that badly mistreated them.

As you can probably tell from the length of this review, I found The Run of His Life fascinating. If I had followed the whole saga more closely at the time, this may have been old material, but I think my vague impressions and patchwork of assumptions were more typical than not among people who were around for the trial but didn't invest a lot of effort into following it. If you are like me, and you have any interest in the case or the details of how the US criminal justice system works, this is a fascinating case study, and Toobin does a great job with it. Recommended.

Rating: 8 out of 10

24 July, 2016 02:13AM

July 23, 2016

hackergotchi for Steve Kemp

Steve Kemp

A final post about the lua-editor.

I recently mentioned that I'd forked Antirez's editor and added lua to it.

I've been working on it, on and off, for the past week or two now. It's finally reached a point where I'm content:

  • The undo-support is improved.
  • It has buffers, such that you can open multiple files and switch between them.
    • This allows things like "kilua *.txt" to work, for example.
  • The syntax-highlighting is improved.
    • We can now change the size of TAB-characters.
    • We can now enable/disable highlighting of trailing whitespace.
  • The default configuration-file is now embedded in the body of the editor, so you can run it portably.
  • The keyboard input is better, allowing multi-character bindings.
    • For example, bindings such as ^C, M-!, and ^X^C are now possible.

Most of the obvious things I use in Emacs are present, such as the ability to customize the status-bar (right now it shows the cursor position, the number of characters, the number of words, etc, etc).

Anyway I'll stop talking about it now :)

23 July, 2016 05:30AM

hackergotchi for Francois Marier

Francois Marier

Replacing a failed RAID drive

Here's the complete procedure I followed to replace a failed drive from a RAID array on a Debian machine.

Replace the failed drive

After seeing that /dev/sdb had been kicked out of my RAID array, I used smartmontools to identify the serial number of the drive to pull out:

smartctl -a /dev/sdb

Armed with this information, I shut down the computer, pulled the bad drive out and put the new blank one in.

Initialize the new drive

After booting with the new blank drive in, I copied the partition table using parted.

First, I took a look at what the partition table looks like on the good drive:

$ parted /dev/sda
unit s

and created a new empty one on the replacement drive:

$ parted /dev/sdb
unit s
mktable gpt

then I ran mkpart for all 4 partitions and made them all the same size as the matching ones on /dev/sda.

Finally, I ran toggle 1 bios_grub (boot partition) and toggle X raid (where X is the partition number) for all RAID partitions, before verifying using print that the two partition tables were now the same.
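Concretely, the per-partition parted commands look something like this (a sketch with made-up partition names and sector numbers; the real start/end values come from the print output on /dev/sda):

```
(parted) mkpart grub 2048s 4095s
(parted) toggle 1 bios_grub
(parted) mkpart md0 4096s 1953125s
(parted) toggle 2 raid
(parted) print
```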

Resync/recreate the RAID arrays

To sync the data from the good drive (/dev/sda) to the replacement one (/dev/sdb), I ran the following on my RAID1 partitions:

mdadm /dev/md0 -a /dev/sdb2
mdadm /dev/md2 -a /dev/sdb4

and kept an eye on the status of this sync using:

watch -n 2 cat /proc/mdstat
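To block until the resync is done instead of watching it, a small polling helper can be used. A sketch (it just greps for resync/recovery lines; taking the status file as a parameter keeps it testable against saved /proc/mdstat output):

```shell
# Succeed while any md array in the given status file is still
# resyncing or recovering.
mdstat_busy() {
    grep -qE 'resync|recovery' "$1"
}

# On a live system:
#   while mdstat_busy /proc/mdstat; do sleep 10; done
#   echo "sync finished"
```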

In order to speed up the sync, I used the following trick:

blockdev --setra 65536 "/dev/md0"
blockdev --setra 65536 "/dev/md2"
echo 300000 > /proc/sys/dev/raid/speed_limit_min
echo 1000000 > /proc/sys/dev/raid/speed_limit_max

Then, I recreated my RAID0 swap partition like this:

mdadm /dev/md1 --create --level=0 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md1

Because the swap partition is brand new (you can't restore a RAID0, you need to re-create it), I had to update two things:

  • replace the UUID for the swap mount in /etc/fstab, with the one returned by mkswap (or running blkid and looking for /dev/md1)
  • replace the UUID for /dev/md1 in /etc/mdadm/mdadm.conf with the one returned for /dev/md1 by mdadm --detail --scan
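Both substitutions are mechanical, so they can be scripted; a sketch (update_uuid is a hypothetical helper, and the UUIDs in the usage comment are made up; the real ones come from mkswap/blkid and mdadm --detail --scan):

```shell
# Replace every occurrence of an old UUID with a new one, in place.
# Usage: update_uuid OLD_UUID NEW_UUID FILE
update_uuid() {
    sed -i "s/$1/$2/g" "$3"
}

# e.g. after mkswap printed the new swap UUID:
#   update_uuid "$OLD_UUID" "$NEW_UUID" /etc/fstab
#   update_uuid "$OLD_UUID" "$NEW_UUID" /etc/mdadm/mdadm.conf
```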

Ensuring that I can boot with the replacement drive

In order to be able to boot from both drives, I reinstalled the grub boot loader onto the replacement drive:

grub-install /dev/sdb

before rebooting with both drives to first make sure that my new config works.

Then I booted without /dev/sda to make sure that everything would be fine should that drive fail and leave me with just the new one (/dev/sdb).

This test obviously gets the two drives out of sync, so I rebooted with both drives plugged in and then had to re-add /dev/sda to the RAID1 arrays:

mdadm /dev/md0 -a /dev/sda2
mdadm /dev/md2 -a /dev/sda4

Once that finished, I rebooted again with both drives plugged in to confirm that everything is fine:

cat /proc/mdstat

Then I ran a full SMART test over the new replacement drive:

smartctl -t long /dev/sdb

23 July, 2016 05:00AM

Mateus Bellomo

Send/receive text messages to buddies

Some weeks ago I implemented the option to send a text message from Empathy using telepathy-resiprocate. At that time I implemented it in the apps/telepathy/TextChannel class, which wasn't ideal. Now, with a better understanding of the resip/recon and resip/dum APIs, I was able to move this implementation there.

Besides that, I have also implemented the option to receive a text message. For that I made some changes to the resip/recon/ConversationManager and resip/recon/UserAgent classes, along with some others.

The complete changes can be seen at [1]. This branch also holds modifications related to sending/receiving presence; this is necessary since, to send a message to a contact, the contact should be online.

There is still work to be done, especially checking the possible error cases, but at least we can see a first prototype working. Here are some images:

[Image textChannel_Jitsi: account logged in with Jitsi]



[Image textChannel_Empathy: account logged in with Empathy using telepathy-resiprocate]

[1] https://github.com/resiprocate/resiprocate/compare/master…MateusBellomo:mateus-presence-text

23 July, 2016 04:00AM by mateusbellomo

July 22, 2016

hackergotchi for Norbert Preining

Norbert Preining

Yukio Mishima – The Temple of the Golden Pavilion

A masterpiece of modern Japanese literature: Yukio Mishima (三島由紀夫) The Temple of the Golden Pavilion (金閣寺). The fictional story about the very real arson attack that destroyed the Golden Pavilion in 1950.

A somewhat different treatise on beauty and ugliness!

How shall I put it? Beauty – yes, beauty is like a decayed tooth. It rubs against one’s tongue, it hangs there, hurting one, insisting on its own existence, finally […] the tooth extracted. Then, as one looks at the small, dirty, brown, blood-stained tooth lying in one’s hand, one’s thoughts are likely to be as follows: ‘Is this it?’

Mizoguchi, a stutterer, is from his young years on taken by a near-mystical image of the Golden Pavilion, influenced by his father, who considers it the most beautiful object in the world. After his father's death he moves to Kyoto and becomes an acolyte in the temple. He develops a friendship with Kashiwagi, who uses his clubfeet to make women feel sorry for him and make them fall in love with his clubfeet, as he puts it. Kashiwagi also puts Mizoguchi onto the first tracks of amorous experiences, but Mizoguchi invariably turns out to be impotent – not due to his stuttering, but due to the image of the Golden Pavilion appearing at the essential moment and destroying every chance.

Yes, this was really the coast of the Sea of Japan! Here was the source of all my unhappiness, of all my gloomy thoughts, the origin of all my ugliness and all my strength. It was a wild sea.

Mizoguchi becomes more and more mentally unstable over his relation with the head monk, neglects his studies, and after a stark reprimand he escapes to the north coast, from where he is brought back to the temple by the police. He decides to burn down the Golden Pavilion, which has taken more and more command of his thinking and doing. He carries out the deed with the aim of burning himself in the top floor, but escapes at the last second and retreats into the hills to watch the spectacle.

Closely based on the true story of the arsonist of the Golden Pavilion, whom Mishima even visited in prison, the book is a treatise on beauty and ugliness.

At his trial he [the real arsonist] said: “I hate myself, my evil, ugly, stammering self.” Yet he also said that he did not in any way regret having burned down the Kinkakuji.
Nancy Wilson Ross in the preface

Mishima is a master at showing these two extremes by contrasting the refined qualities of Japanese culture – flower arrangement, playing the shakuhachi, … – with immediate outbursts of contrasting behavior: cold and brutal recklessness. Take for example the scene where Kashiwagi is arranging flowers, stolen by Mizoguchi from the temple grounds, while Mizoguchi plays the flute. They also discuss koans and various interpretations. Enter the Ikebana teacher, and mistress of Kashiwagi. She congratulates Kashiwagi on his excellent arrangement, which he answers coldly by quitting their relationship, both as teacher and as mistress, telling her in a formal style not to see him again. She, still ceremonially kneeling, suddenly destroys the flower arrangement, only to be beaten and thrown out by Kashiwagi. And the beauty and harmony have turned to ugliness and hate within seconds.

Beauty and ugliness, two sides of the same coin, or inherently the same, because it is only a matter of point of view. Mishima ingeniously plays with this duality, and leads us through the slow and painful development of Mizoguchi to the bitter end, which finally gives him freedom, freedom from the force of beauty. Sometimes, seeing how our society is obsessed with beauty, I cannot get rid of the feeling that there are far more Mizoguchis at heart.

22 July, 2016 03:37PM by Norbert Preining

hackergotchi for Paul Tagliamonte

Paul Tagliamonte


I’ll be at HOPE 11 this year - if anyone else will be around, feel free to send me an email! I won’t have a phone on me (so texting only works if you use Signal!)

Looking forward to a chance to see everyone soon!

22 July, 2016 12:16PM

Russell Coker

802.1x Authentication on Debian

I recently had to set up some Linux workstations with 802.1x authentication (described as “Ethernet authentication”) to connect to a smart switch. The most useful web site I found was the Ubuntu help site about 802.1x Authentication [1]. But it didn't describe exactly what I needed, so I'm writing a more concise explanation.

The first thing to note is that the authentication mechanism works the same way as 802.11 wireless authentication, so it’s a good idea to have the wpasupplicant package installed on all laptops just in case you need to connect to such a network.

The first step is to create a wpa_supplicant config file; I named mine /etc/wpa_supplicant_SITE.conf. The file needs contents like the following:

 phase2="auth=CHAP password=PASS"

The first difference between what I use and the Ubuntu example is that I’m using “eap=PEAP“, that is an issue of the way the network is configured, whoever runs your switch can tell you the correct settings for that. The next difference is that I’m using “auth=CHAP” and the Ubuntu example has “auth=PAP“. The difference between those protocols is that CHAP has a challenge-response and PAP just has the password sent (maybe encrypted) over the network. If whoever runs the network says that they “don’t store unhashed passwords” or makes any similar claim then they are almost certainly using CHAP.
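The snippet above shows only the phase2 line; a fuller file, assuming a PEAP/CHAP setup like the one described (all field values are placeholders, and your network may need different options), might look like this:

```
network={
    key_mgmt=IEEE8021X
    eap=PEAP
    identity="USERNAME"
    password="PASS"
    phase2="auth=CHAP password=PASS"
    eapol_flags=0
}
```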

Change USERNAME and PASS to your user name and password.

wpa_supplicant -c /etc/wpa_supplicant_SITE.conf -D wired -i eth0

The above command can be used to test the operation of wpa_supplicant.

Successfully initialized wpa_supplicant
eth0: Associated with 00:01:02:03:04:05
eth0: CTRL-EVENT-EAP-STARTED EAP authentication started
eth0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=25
TLS: Unsupported Phase2 EAP method 'CHAP'
eth0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 25 (PEAP) selected
eth0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject=''
eth0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject=''
EAP-MSCHAPV2: Authentication succeeded
EAP-TLV: TLV Result - Success - EAP-TLV/Phase2 Completed
eth0: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully
eth0: CTRL-EVENT-CONNECTED - Connection to 00:01:02:03:04:05 completed [id=0 id_str=]

Above is the output of a successful test with wpa_supplicant. I replaced the MAC address of the switch with 00:01:02:03:04:05. Strangely it doesn't like “CHAP” but automatically selects “MSCHAPV2” and works; maybe anything other than “PAP” would do.

auto eth0
iface eth0 inet dhcp
  wpa-driver wired
  wpa-conf /etc/wpa_supplicant_SITE.conf

Above is a snippet of /etc/network/interfaces that works with this configuration.

22 July, 2016 06:10AM by etbe

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppCCTZ 0.0.5

Version 0.0.5 of RcppCCTZ arrived on CRAN a couple of days ago. It reflects an upstream fix made a few weeks ago. CRAN tests revealed that g++-6 was tripping over one missing #define; this was added upstream and I subsequently synchronized with upstream. At the same time the set of examples was extended (see below).

Somehow useR! 2016 got in the way, and while working on the then-incomplete examples during the travel I forgot to release this until CRAN reminded me that their tests still failed. I promptly prepared the 0.0.5 release but somehow failed to update the NEWS files etc. They are correct in the repo but not in the shipped package. Oh well.

CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries: one for dealing with civil time (human-readable dates and times), and one for converting between absolute and civil times via time zones. It requires only a proper C++11 compiler and the standard IANA time zone database which standard Unix, Linux, OS X, ... computers tend to have in /usr/share/zoneinfo. RcppCCTZ connects this library to R by relying on Rcpp.

Two good examples are now included, and shown here. The first one tabulates the time difference between New York and London (at a weekly level for compactness):

R> example(tzDiff)

tzDiffR> # simple call: difference now
tzDiffR> tzDiff("America/New_York", "Europe/London", Sys.time())
[1] 5

tzDiffR> # tabulate difference for every week of the year
tzDiffR> table(sapply(0:52, function(d) tzDiff("America/New_York", "Europe/London",
tzDiff+                                       as.POSIXct(as.Date("2016-01-01") + d*7))))

 4  5 
 3 50 

Because the two continents spring forward and fall back between standard and daylight-saving time on different dates, there are, respectively, a two-week period in spring and a one-week period in autumn where the difference is one hour less than usual.
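The same tabulation can be sketched outside R; here is a rough Python equivalent using the standard-library zoneinfo module (the function name tz_diff and the weekly sampling are my own illustration, not part of RcppCCTZ):

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

def tz_diff(tz_a, tz_b, when):
    """Whole hours of difference between tz_b and tz_a at an aware datetime."""
    a = when.astimezone(ZoneInfo(tz_a)).utcoffset()
    b = when.astimezone(ZoneInfo(tz_b)).utcoffset()
    return (b - a) // timedelta(hours=1)

# Tabulate the New York / London difference for every week of 2016
start = datetime(2016, 1, 1, tzinfo=timezone.utc)
diffs = [tz_diff("America/New_York", "Europe/London", start + timedelta(weeks=w))
         for w in range(53)]
print({d: diffs.count(d) for d in sorted(set(diffs))})  # {4: 3, 5: 50}
```

As in the R example, the three weeks with a four-hour difference come from the two zones switching to and from daylight-saving time on different dates.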

A second example shifts the time to a different time zone:

R> example(toTz)

toTzR> toTz(Sys.time(), "America/New_York", "Europe/London")
[1] "2016-07-14 10:28:39.91740 CDT"

Note that because we return a POSIXct object, it is printed by R with the default (local) TZ attribute (for "America/Chicago" in my case). A more direct example asks what time it is in my time zone when it is midnight in Tokyo:

R> toTz(ISOdatetime(2016,7,15,0,0,0), "Japan", "America/Chicago")
[1] "2016-07-14 15:00:00 CDT"

More changes will come in 0.0.6 as soon as I find time to translate the nice time_tool (command-line) example into an R function.

Changes in this version are summarized here:

Changes in version 0.0.5 (2016-07-09)

  • New utility example functions toTz() and tzDiff()

  • Synchronized with small upstream change for additional #ifdef for compiler differentiation

We also have a diff to the previous version thanks to CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

22 July, 2016 03:08AM

hackergotchi for Martin Michlmayr

Martin Michlmayr

Debian on Seagate Personal Cloud and Seagate NAS

The majority of NAS devices supported in Debian are based on Marvell's Kirkwood platform. This platform is quite dated now and can only run Debian's armel port.

Debian now supports the Seagate Personal Cloud and Seagate NAS devices. They are based on Marvell's Armada 370, a platform which can run Debian's armhf port. Unfortunately, even the Armada 370 is a bit dated now, so I would not recommend these devices for new purchases. If you have one already, however, you now have the option to run native Debian.

There are some features I like about the Seagate NAS devices:

  • Network console: you can connect to the boot loader via the network. This is useful to load Debian or to run recovery commands if needed.
  • Mainline support: the devices are supported in the mainline kernel.
  • Good contacts: Seagate engineer Simon Guinot is interested in Debian support and is a joy to work with. There's also a community for LaCie NAS devices (Seagate acquired LaCie).

If you have a Seagate Personal Cloud or Seagate NAS, you can follow the instructions on the Debian wiki.

If Seagate releases more NAS devices on Marvell's Armada platform, I intend to add Debian support.

22 July, 2016 02:50AM

July 21, 2016

Vincent Fourmond

QSoas version 2.0 is out / QSoas paper

I thought it would come sooner, but I've finally gotten around to releasing version 2.0 of my data analysis program, QSoas!

It provides significant improvements to the fit interface, in particular for multi-buffer fits, with a “Multi” fit engine that performs very well for large multi-buffer fits, a spreadsheet editor for fit parameters, and more usability improvements. It also features the definition of fits with a distribution of values of one of the fit parameters, and new built-in fits. In addition, QSoas version 2.0 features new commands to derive data, to flag buffers and to handle large multi-column datasets, as well as improvements to existing commands. The full list of changes since version 1.0 can be found there.

As before, you can download the source code from our website, and purchase the pre-built binaries following the links from that page too.

In addition, I am glad to announce that QSoas is now described in a recent publication, Fourmond, Anal. Chem., 2016, 88, 5050-5052. Please cite this publication if you used QSoas to process your data.

21 July, 2016 09:04PM by Vincent Fourmond (noreply@blogger.com)

Olivier Grégoire

Eighth week: create an API in the Ring client library (LRC)

At the beginning of the week, I wasn't really using the LRC to communicate with my client:
-The client calls a function in LRC that calls my method, which in turn calls my program.
-The daemon emits a signal connected to a Qt slot in LRC; from there, I emit another signal connected to a lambda function in the client.

I had never programmed an API before, and I began writing code without checking how best to do it. I needed to extract all the information from the map<string, string> sent by the daemon and present it in my API. After studying the code, I saw that LRC follows the KDE library coding policy, so I changed my architecture to follow the same policy. Basically, I needed to create a public and a private header using the d-pointer idiom. The private header contains the slot that is connected to the daemon and all the private variables. The public header contains a signal, connected to a lambda function, which tells the client when some information has changed and needs to be refreshed. This header obviously contains all the getters too.

I have now a functional API.

Next week I will work on the gnome client to use this new API.

21 July, 2016 04:57PM

Reproducible builds folks

Reproducible builds: week 62 in Stretch cycle

What happened in the Reproducible Builds effort between June 26th and July 2nd 2016:

Read on to find out why we're lagging some weeks behind…!

GSoC and Outreachy updates

  • Ceridwen described using autopkgtest code to communicate with containers and how to test the container handling.

  • reprotest 0.1 has been accepted into Debian unstable, and any user reports, bug reports, feature requests, etc. would be appreciated. This is still an alpha release, and nothing is set in stone.

Toolchain fixes

  • Matthias Klose uploaded doxygen/1.8.11-3 to Debian unstable (closing #792201) with the upstream patch improving SOURCE_DATE_EPOCH support by using UTC as timezone when parsing the value. This was the last patch we were carrying in our repository, thus this upload obsoletes the version in our experimental repository.
  • cmake/3.5.2-2 was uploaded by Felix Geyer, which sorts file lists obtained with file(GLOB).
  • Dmitry Shachnev uploaded sphinx/1.4.4-2, which fixes a timezone related issue when SOURCE_DATE_EPOCH is set.
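For context, the SOURCE_DATE_EPOCH convention these fixes implement is simple: when the variable is set, a tool should use it instead of the current time for any embedded timestamps. A minimal sketch in Python (my own illustration, not code from any of the packages above):

```python
import os
import time
from datetime import datetime, timezone

def build_timestamp():
    # Honour SOURCE_DATE_EPOCH when set, so rebuilding the same source
    # embeds an identical date; fall back to the current time otherwise.
    epoch = int(os.environ.get("SOURCE_DATE_EPOCH", time.time()))
    return datetime.fromtimestamp(epoch, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")

os.environ["SOURCE_DATE_EPOCH"] = "0"
print(build_timestamp())  # 1970-01-01 00:00:00 UTC
```

Note that the timestamp is rendered in UTC, which is exactly the timezone issue the doxygen and sphinx uploads above address.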

With the doxygen upload we are now down to only 2 modified packages in our repository: dpkg and rdfind.

Weekly reports delay and the future of statistics

To catch up with our backlog of weekly reports we have decided to skip some of the statistics for this week. We might publish them in a future report, or we might switch to a format where we summarize them more (and which we can create (even) more automatically), we'll see.

We are doing these weekly statistics because we believe it's appropriate and useful to credit people's work and make it more visible. What do you think? We would love to hear your thoughts on this matter! Do you read these statistics? Somewhat?

Actually, thanks to the power of notmuch, Holger came up with what you can see below, so what's missing for this week are the uploads fixing irreproducibilities, which we really would like to show for the reasons stated above and because we really, really need these uploads to happen ;-)

But then we also like to confirm that the bugs are really gone, which (at the moment) requires manual checking, and to look for the words "reproducible" and "deterministic" (and spelling variations) in the debian/changelogs of all uploads, to spot reproducibility work not tracked via the BTS.

And we still need to catch up on the backlog of weekly reports.

Bugs submitted with reproducible usertags

It seems DebCamp in Cape Town was hugely successful and made some people get a lot of work done:

61 bugs have been filed with reproducible builds usertags and 60 of them had patches:

Package reviews

437 new reviews have been added (though most of them were just linking the bug; "only" 56 new issues in packages were found), an unknown number has been updated and 60 have been removed this week, adding to our knowledge about identified issues.

4 new issue types have been found:

Weekly QA work

98 FTBFS bugs have been reported by Chris Lamb and Santiago Vila.

diffoscope development

strip-nondeterminism development

  • Chris Lamb made sure that .zhfst files are treated as ZIP files.


  • Mattia Rizzolo uploaded pbuilder/0.225.1~bpo8+1 to jessie-backports and it has been installed on all build nodes. As a consequence all armhf and i386 builds will be done with eatmydata; this will hopefully cut down the build time by a noticeable factor.


This week's edition was written by Mattia Rizzolo, Reiner Herrmann, Ceridwen and Holger Levsen and reviewed by a bunch of Reproducible builds folks on IRC.

21 July, 2016 01:13PM

hackergotchi for Chris Lamb

Chris Lamb

Python quirk: Signatures are evaluated at import time

Every Python programmer knows to avoid mutable default arguments:

def fn(mutable=[]):
    mutable.append('elem')
    print mutable

fn()
fn()

$ python test.py
['elem']
['elem', 'elem']

However, many are not clear that this is because default arguments are evaluated when the function is defined (which, for top-level functions, happens at import time), rather than each time the function is called.
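The usual defence, for completeness, is the standard None-sentinel idiom, so that a fresh list is created on every call (shown here in Python 3 syntax):

```python
def fn(mutable=None):
    if mutable is None:
        mutable = []  # a new list per call, never shared between calls
    mutable.append('elem')
    return mutable

print(fn())  # ['elem']
print(fn())  # ['elem'] -- no state leaks between calls
```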

This results in related quirks such as:

def never_called(error=1/0):
    pass

$ python test.py
Traceback (most recent call last):
  File "test.py", line 1, in <module>
ZeroDivisionError: integer division or modulo by zero

... and an implementation-specific quirk caused by naive constant folding:

def never_called():
    99999999 ** 9999999
$ python test.py

Although never_called is never executed, CPython's peephole optimizer tries to fold the huge power at compile time, so merely importing the module burns CPU and memory. I suspect that this can be used as a denial-of-service vector.

21 July, 2016 11:07AM

July 20, 2016

hackergotchi for Daniel Pocock

Daniel Pocock

How many mobile phone accounts will be hijacked this summer?

Summer vacations have been getting tougher in recent years. Airlines cut into your precious vacation time with their online check-in procedures and a dozen reminder messages, there is growing concern about airport security and Brexit has already put one large travel firm into liquidation leaving holidaymakers in limbo.

If that wasn't all bad enough, now there is a new threat: while you are relaxing in the sun, scammers fool your phone company into issuing a replacement SIM card or transferring your mobile number to a new provider and then proceed to use it to take over all your email, social media, Paypal and bank accounts. The same scam has been appearing around the globe, from Britain to Australia and everywhere in between. Many of these scams were predicted in my earlier blog SMS logins: an illusion of security (April 2014) but they are only starting to get publicity now as more aspects of our lives are at risk, scammers are ramping up their exploits and phone companies are floundering under the onslaught.

With the vast majority of Internet users struggling to keep their passwords out of the wrong hands, many organizations have started offering their customers the option of receiving two-factor authentication codes on their mobile phone during login. Rather than making people safer, this has simply given scammers an incentive to seize control of telephones, usually by tricking the phone company to issue a replacement SIM or port the number. It also provides a fresh incentive for criminals to steal phones while cybercriminals have been embedding code into many "free" apps to surreptitiously re-route the text messages and gather other data they need for an identity theft sting.

Sadly, telephone networks were never designed for secure transactions. Telecoms experts have made this clear numerous times. Some of the largest scams in the history of financial services exploited phone verification protocols as the weakest link in the chain, including a $150 million heist reminiscent of Ocean's 11.

For phone companies, SMS messaging came as a side-effect of digital communications for mobile handsets. It is less than one percent of their business. SMS authentication is less than one percent of that. Phone companies lose little or nothing when SMS messages are hijacked so there is little incentive for them to secure it. Nonetheless, like insects riding on an elephant, numerous companies have popped up with a business model that involves linking websites to the wholesale telephone network and dressing it up as a "security" solution. These companies are able to make eye-watering profits by "purchasing" text messages for $0.01 and selling them for $0.02 (one hundred percent gross profit), but they also have nothing to lose when SIM cards are hijacked and therefore minimal incentive to take any responsibility.

Companies like Google, Facebook and Twitter have thrown more fuel on the fire by encouraging and sometimes even demanding users provide mobile phone numbers to "prove they are human" or "protect" their accounts. Through these antics, these high profile companies have given a vast percentage of the population a false sense of confidence in codes delivered by mobile phone, yet the real motivation for these companies does not appear to be security at all: they have worked out that the mobile phone number is the holy grail in cross-referencing vast databases of users and customers from different sources for all sorts of creepy purposes. As most of their services don't involve any financial activity, they have little to lose if accounts are compromised and everything to gain by accurately gathering mobile phone numbers from as many users as possible.

Can you escape your mobile phone while on vacation?

Just how hard is it to get a replacement SIM card or transfer/port a user's phone number while they are on vacation? Many phone companies will accept instructions through a web form or a phone call. Scammers need little more than a user's full name, home address and date of birth: vast lists of these private details are circulating on the black market, sourced from social media, data breaches (99% of which are never detected or made public), marketing companies and even the web sites that encourage your friends to send you free online birthday cards.

Every time a company has asked me to use mobile phone authentication so far, I've opted out and I'll continue to do so. Even if somebody does hijack my phone account while I'm on vacation, the consequences for me are minimal as it will not give them access to any other account or service. Can you and your family members say the same thing?

What can be done?

  • Opt-out of mobile phone authentication schemes.
  • Never give your mobile phone number to web sites unless there is a real and pressing need for them to call you.
  • Tell firms you don't have a mobile phone or that you share your phone with your family and can't use it for private authentication.
  • If you need to use two-factor authentication, only use technical solutions such as smart cards or security tokens that have been engineered exclusively for computer security. Leave them in a locked drawer or safe while on vacation. Be wary of anybody who insists on SMS and doesn't offer these other options.
  • Rather than seeking to "protect" accounts, simply close some or all social media accounts to reduce your exposure and eliminate the effort of keeping them "secure" and updating "privacy" settings.
  • If your bank provides a relationship manager or other personal contact, this
    can also provide a higher level of security as they get to know you.

See my previous blogs on SMS messaging, security and two-factor authentication, including my earlier blog SMS Logins: an illusion of security.

20 July, 2016 05:48PM by Daniel.Pocock

hackergotchi for Michal Čihař

Michal Čihař

New projects on Hosted Weblate

For almost two months I found very little time to process requests to host free software on Hosted Weblate. Today the queue has been emptied, which means that you can find many new translations there.

To make it short, here is the list of new projects:

PS: If you didn't receive a reply to your hosting request today, it was probably lost, so don't hesitate to ask again.

Filed under: Debian English Weblate | 0 comments

20 July, 2016 05:00PM

hackergotchi for Shirish Agarwal

Shirish Agarwal

Debconf 16 and My Experience with Debian

It has been often said that you should continually try new things in life so that

a. Unlike the fish, you do not mistake the pond for the sea.

b. You see other people, other types and ways of living and being which you normally won’t in your day-to-day existence.

With both of those as mantras I decided to take a leap into the unknown. I was unsure about both the visa process and the travel itself, as I was traveling to an unknown place, and although I had done some research about it, I was unsure about the authenticity of whatever was shared on the web.

During the whole journey, both to and fro, I couldn't sleep a wink. The Doha airport is huge. There are 5 concourses, A, B, C, D and E, with around 30+ gates in each. The ambition of the small state is something to be reckoned with. Almost 95% of the blue-collar workers in the entire airport were from the Asian subcontinent. While the Qatari Rial is 19 times stronger than our currency, I suspect the workers are worse off than people doing similar things back home. Add to that sharia law, and even for all the money in the world I wouldn't want to settle there.

Anyway, during the journey a small surprise awaited me: Ritesh Raj Saraff, a DD, was also traveling to Debconf. We bumped into each other while going to see Doha City, courtesy of Al-Hamad International Airport. I will probably share a bit more about Doha and my experiences with the city in upcoming posts.

Cut to Cape Town, South Africa: we landed half an hour after our scheduled time and then sped along to the University of Cape Town (UCT), which was to become our home for the next 13-odd days.

The first few days were a whirlwind, as there were new people to meet; people whom I had known only as an e-mail ID or an IRC nickname turned out to be real people, and I had to try to articulate myself in English, which is not my native language. During Debcamp I was fortunate to be able to visit some of the places; the wiki page listed so many that I knew I wouldn't be able to cover them all unless I had 15 days of unlimited time and money to go around, so I didn't even try.

I had gone with a few goals in mind:

a. Do some documentation of the event – in this I failed completely, as just the walk from the venue to where the talks were held was energy-draining for me. Apart from that, you get swept up in meeting new people and talking about one of a million topics in Debian which interest you or the other person, and while those conversations are fulfilling, it was both physically and emotionally draining for me (in a good way). Bernelle (one of the organizers) had warned us of this phenomenon, but you disregard it because you know you have a limited time-frame in which to meet and greet people, and it is all an overwhelming experience.

b. Another goal was to meet my Indian brethren whose ancestors had left the country around 60 to 100 years ago, mostly as slaves of the East India Company – in this I was partially successful. I met a couple of beautiful ladies who had either a father or a mother who was Indian, while the other parent was of African heritage. There seemed to be in them a yearning to know the culture, but from what little they had, Bollywood and Indian cuisine were all they could make of Indian culture. One of the girls, ummm… women to be truer, shared a somewhat grim tale: she had had both an African boyfriend and an Indian boyfriend in her life, and in both cases she was rejected by the boy's parents because she wasn't pure enough. This was deja vu all over again, as the same thing can be seen happening here with casteism, so there wasn't any advice I could give but to nod in empathy. What was something of a revelation was that when their parents or grandparents came, their names and surnames were thrown off, and the surname became just the place they came from. From the discussions it also emerged that there were a lot of cases of forced conversion to Christianity during that era, as well as temptations of a better life.

As shared, this goal succeeded only partially, as I was actually interested in their parents or grandparents, to learn of the events that shaped the Indian diaspora over there. While the children know only of today, the yester-years could only be known by the people who made the unwilling, perilous journey to Africa. I had also wanted to know more about Gandhiji's role in that era, but alas, that part of history will have to wait for another day; I guess both of those goals would only have been met had I visited Durban, but that was not to be.

I had applied for one talk, ‘My Experience with Debian’, and one workshop on installing Debian. The ‘My Experience with Debian’ talk was aimed at newbies, and I had thought of using show-and-tell to share the differences between proprietary operating systems and a FOSS distribution such as Debian. I was going to take simple things such as changelogs, apt-listbugs, real-time knowledge of updates and upgrades, as well as /etc/apt/sources.list, to show both the versatility of the Debian desktop and real improvements over what proprietary operating systems have to offer. But I found myself engaging with Debian Developers (DDs) rather than newbies, so I had to change the orientation and fundamentals of the talk on the fly. I knew, or rather suspected, that the old idea would not work, as it would just be preaching to the choir. With that in the back of my mind, and the idea that perhaps they would not be so aware of the politics and events which happened in India over the last couple of decades, I tried to share what little I was able to recollect about those times. Apart from that, I was also highly conscious that I had been given the before-lunch slot, aka the ‘You are in the way of my lunch’ slot, so I knew I had to speak my piece as quickly and as clearly as possible. Later I did get feedback that I was too fast, and having watched it a couple of times, I agree that I could have done a better job. What's done is done, and the only thing I can do to salvage it a bit is to share the presentation below.


It would be nice if somebody could come up with a lighter template for presentations. For reference, the template I took it from is shared at https://wiki.debian.org/Presentations . Some pictures from the presentation:




You can find the video at http://meetings-archive.debian.net/pub/debian-meetings/2016/debconf16/My_Experience_with_Debian.webm

This is by no means the end of the Debconf16 experience, but actually the starting. I hope to share more of my thoughts, ideas and get as much feedback from all the wonderful people I met during Debconf.

Filed under: Miscellenous Tagged: #Debconf16, Doha, My talk, Qatar

20 July, 2016 02:38PM by shirishag75

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Solskogen 2016 videos

I just published the videos from Solskogen 2016 on YouTube; you can find them all in this playlist. They are basically exactly what was sent out on the live stream, frame for frame, except that the audio for the live shader compos has been remastered, and of course a lot of dead time has been cut out (the stream was running over several days, but most of the time it showed only the information loop from the bigscreen).

As far as I can tell, YouTube doesn't really support the variable 50/60 Hz frame rate we've been using; mostly it seems to do some 60 Hz upconversion, which is okay enough, because the rest of your setup most likely isn't free-framerate anyway.

Solskogen is interesting in that we're trying to do a high-quality stream with essentially zero money allocated to it; where something like Debconf can use €2500 for renting and transporting equipment (granted, for two or three rooms and not our single stream), we're largely dependent on personal equipment as well as borrowing things here and there. (I think we borrowed stuff from more or less ten distinct places.) Furthermore, we're nowhere near the situation of “two cameras, a laptop, perhaps a few microphones”; not only do you expect to run full 1080p60 to the bigscreen and switch between that and information slides for each production, but an Amiga 500 doesn't really have an HDMI port, and Commodore 64 delivers an infamously broken 50.12 Hz signal that you really need to deal with carefully if you want it to not look like crap.

These two factors together lead to a rather eclectic setup; here, visualized beautifully from my ASCII art by ditaa:

Solskogen 2016 A/V setup diagram

Of course, for me, the really interesting part here is near the end of the chain, with Nageru, my live video mixer, doing the stream mixing and encoding. (There's also Cubemap, the video reflector, but honestly, I never worry about that anymore. Serving 150 simultaneous clients is just not something to write home about anymore; the only adjustment I would want to make would probably be some WebSockets support to be able to deal with iOS without having to use a secondary HLS stream.) Of course, to make things even more complicated, the live shader compo needs two different inputs (the two coders' laptops) live on the bigscreen, which was done with two video capture cards, text chroma-keyed on top from Chroma, and OBS, because the guy controlling the bigscreen has different preferences from me. I would take his screen in as a “dirty feed” and then put my own stuff around it, like this:

Solskogen 2016 shader compo screenshot

(Unfortunately, I forgot to take a screenshot of Nageru itself during this run.)

Solskogen was the first time I'd really used Nageru in production, and despite super-extensive testing, there's always something that can go wrong. And indeed there was: First of all, we discovered that the local Internet line was reduced from 30/10 to 5/0.5 (which is, frankly, unusable for streaming video), and after we'd half-way fixed that (we got it to 25/4 or so by prodding the ISP, of which we could reserve about 2 for video—demoscene content is really hard to encode, so I'd prefer a lot more)… Nageru started crashing.

It wasn't even crashes I understood anything of. Generally it seemed like the NVIDIA drivers were returning GL_OUT_OF_MEMORY on things like creating mipmaps; it's logical that they'd be allocating memory, but we had 6 GB of GPU memory and 16 GB of CPU memory, and lots of it was free. (The PC we used for encoding was much, much faster than what you need to run Nageru smoothly, so we had plenty of CPU power left to run x264 in, although you can of course always want more.) It seemed to be mostly related to zoom transitions, so I generally avoided those and ran that night's compos in a more static fashion.

It wasn't until later that night (or morning, if you will) that I actually understood the bug (through the godsend of the NVX_gpu_memory_info extension, which gave me enough information about the GPU memory state that I understood I wasn't leaking GPU memory at all); I had set Nageru to lock all of its memory used in RAM, so that it would never ever get swapped out and lose frames for that reason. I had set the limit for lockable RAM based on my test setup, with 4 GB of RAM, but this setup had much more RAM, a 1080p60 input (which uses more RAM, of course) and a second camera, all of which I hadn't been able to test before, since I simply didn't have the hardware available. So I wasn't hitting the available RAM, but I was hitting the amount of RAM that Linux was willing to lock into memory for me, and at that point, it'd rather return errors on memory allocations (including the allocations the driver needed to make for its texture memory backings) than to violate the “never swap“ contract.
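A fix along these lines raises the lockable-memory (memlock) limit in /etc/security/limits.conf; this sketch is my own illustration (the user name and the choice of “unlimited” are mine, not from the post):

```
# /etc/security/limits.conf: raise the memlock limit for the streaming user
streamuser  soft  memlock  unlimited
streamuser  hard  memlock  unlimited
```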

Once I fixed this (by simply increasing the amount of lockable memory in limits.conf), everything was rock-stable, just like it should be, and I could turn my attention to the actual production. Often during compos, I don't really need the mixing power of Nageru (it just shows a single input, albeit scaled using high-quality Lanczos3 scaling on the GPU to get it down from 1080p60 to 720p60), but since entries come in using different sound levels (I wanted the stream to conform to EBU R128, which it generally did) and different platforms expect different audio work (e.g., you wouldn't put a compressor on an MP3 track that was already mastered, but we did that on e.g. SID tracks since they have nearly zero ability to control the overall volume), there was a fair bit of manual audio tweaking during some of the compos.

That, and of course, the live 50/60 Hz switches were a lot of fun: If an Amiga entry was coming up, we'd 1. fade to a camera, 2. fade in an overlay saying we were switching to 50 Hz so have patience, 3. set the camera as master clock (because the bigscreen's clock is going to go away soon), 4. change the scaler from 60 Hz to 50 Hz (takes two clicks and a bit of waiting), 5. change the scaler input in Nageru from 1080p60 to 1080p50, 6. steps 3,2,1 in reverse. Next time, I'll try to make that slightly smoother, especially as the lack of audio during the switch (it comes in on the bigscreen SDI feed) tended to confuse viewers.

So, well, that was a lot of fun, and it certainly validated that you can do a pretty complicated real-life stream with Nageru. I have a long list of small tweaks I want to make, though; nothing beats actual experience when it comes to improving processes. :-)

20 July, 2016 10:23AM

Daniel Stender

Theano in Debian: maintenance, BLAS and CUDA

I'm glad to announce that the current release of Theano (0.8.2) is now in Debian unstable; it's on its way into the testing branch and the Debian derivatives, heading for Debian 9. The Debian package is maintained on behalf of the Debian Science Team.

We have a binary package with the modules in the Python 2.7 import path (python-theano), if you want or need to stick to that branch a little longer (as a matter of fact, in the current popcon stats it's the most installed package), and a package running on the default Python 3 version (python3-theano). The comprehensive documentation is available for offline usage in another binary package (theano-doc).

Although Theano builds its extensions at run time and therefore all binary packages contain the same code, the source package generates arch-specific packages1 so that the exhaustive test suite can run on all the architectures to detect whether there are problems somewhere (#824116).

what's this?

In a nutshell, Theano is a computer algebra system (CAS) and expression compiler, which is implemented in Python as a library. It is named after a Classical Greek female mathematician and it's developed at the LISA lab (located at MILA, the Montreal Institute for Learning Algorithms) at the Université de Montréal.

Theano tightly integrates multi-dimensional arrays (N-dimensional, ND-array) from NumPy (numpy.ndarray), which are broadly used in Scientific Python for the representation of numeric data. It features a declarative, Python-based language with symbolic operations for the functional definition of mathematical expressions, and can create functions that compute values for them. Internally, the expressions are represented as directed graphs with nodes for variables and operations. The internal compiler then optimizes those graphs for stability and speed and generates high-performance native machine code to evaluate these mathematical expressions2.
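As a toy illustration of that idea (this is not Theano's API, just a sketch of expressions as graphs of variable and operation nodes that get "compiled" into a callable):

```python
# Toy expression graph: Var and Op nodes form a directed graph, and
# "function" turns the graph into a callable. Theano's real compiler
# would optimize the graph and emit native code at this step.
class Var:
    def __init__(self, name):
        self.name = name
    def __add__(self, other):
        return Op('add', self, other)
    def __mul__(self, other):
        return Op('mul', self, other)
    def eval(self, env):
        return env[self.name]

class Op:
    __add__ = Var.__add__
    __mul__ = Var.__mul__
    def __init__(self, kind, a, b):
        self.kind, self.a, self.b = kind, a, b
    def eval(self, env):
        x, y = self.a.eval(env), self.b.eval(env)
        return x + y if self.kind == 'add' else x * y

def function(inputs, output):
    # Close over the graph; evaluate it for concrete input values.
    names = [v.name for v in inputs]
    return lambda *args: output.eval(dict(zip(names, args)))

x, y = Var('x'), Var('y')
f = function([x, y], x * y + x)
print(f(3, 4))  # prints 15
```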

One of the main features of Theano is that it can also compute on GPUs (graphics processing units), such as consumer graphics cards (e.g. the developers use a GeForce GTX Titan X for benchmarks). Today's GPUs have become very powerful parallel floating-point devices that can be employed for scientific computations as well as for 3D video games3. The acronym "GPGPU" (general-purpose graphics processing unit) refers to special cards like NVIDIA's Tesla4, which can be used the same way (more on that below). Thus, Theano is a high-performance number cruncher with its own computing engine that can be used for large-scale scientific computations.

If you haven't come across Theano as a Pythonistic professional mathematician, it's also one of the most prevalent frameworks around for implementing deep learning applications (training multi-layered, "deep" artificial neural networks, DNN)5, and has been developed with a focus on machine learning from the ground up. There are several higher-level user interfaces built on top of Theano (for DNN: Keras, Lasagne, Blocks, and others; for probabilistic programming in Python: PyMC3). I'll try to get some of them into Debian, too.

helper scripts

Both binary packages ship three convenience scripts: theano-cache, theano-test, and theano-nose. Instead of being copied into /usr/bin, which would result in a binaries-have-conflict violation, the scripts are to be found in /usr/share/python-theano (python3-theano respectively), so that both module packages of Theano can be installed at the same time.

The scripts can be run directly from these folders, e.g. $ python /usr/share/python-theano/theano-nose. If you're going to use them heavily, you can add the directory of the flavour you prefer (Python 2 or Python 3) to the $PATH environment variable, either by typing e.g. $ export PATH=/usr/share/python-theano:$PATH at the prompt, or by saving that line in ~/.bashrc.

Manpages aren't available for these little helper scripts6, but you can always get info on what they do and which arguments they accept by invoking them with the -h flag (for theano-nose) or the help argument (for theano-cache).

running the tests

On some occasions you might want to run the test suite of the installed library, e.g. to check whether everything runs fine on your GPU hardware. There are two different ways to run the tests (either way you need to have python{,3}-nose installed). One is to launch the test suite with $ python -c 'import theano; theano.test()' (or the same with python3 to test the other flavour); that's what the helper script theano-test does. However, run that way, some particular tests raise errors even for the group of known failures.

Known failures are excluded from being errors if you run the tests with theano-nose, which is a wrapper around nosetests, so this is generally the better choice. You can run this convenience script with the option --theano against the installed library, or from the source package root, which you can fetch with $ sudo apt-get source theano (there you also have the option of using bin/theano-nose). The script accepts options for nosetests, so you might run it with -v to increase verbosity.

For the tests, the configuration switch config.device must be set to cpu. This will still include the GPU tests when a properly accessible device is detected, so the name is a little misleading: it doesn't mean "run everything on the CPU". If you've set config.device to gpu in your ~/.theanorc, you're on the safe side if you always run the suite like this: $ THEANO_FLAGS=device=cpu theano-nose.

Depending on the available hardware and the BLAS implementation used (see below), it can take quite a long time to run the whole test suite through; on the Core i5 in my laptop it takes around an hour, even excluding the GPU-related tests (which run pretty fast, though). Theano features a couple of switches to manipulate the default configuration for optimization and compilation. There is a trade-off between optimization and compilation cost on the one hand and the runtime of the test suite on the other, and it turns out the test suite runs quicker with less graph optimization. There are two different values available for config.optimizer: fast_run toggles maximal optimization, while fast_compile runs only a minimal set of graph optimization features. These settings are used by the general mode switch config.mode, which is either FAST_RUN (the default) or FAST_COMPILE. The default mode FAST_RUN (optimizer=fast_run, linker=cvm) needs around 72 minutes on my lower mid-level machine (with un-optimized BLAS). Setting mode=FAST_COMPILE (optimizer=fast_compile, linker=py) gives the test suite a boost, running the whole suite in 46 minutes. The downside is that C code compilation is disabled in this mode by the linker py, and the GPU-related tests are not included either. I've played around with the optimizer fast_compile combined with some of the other linkers (c|py and cvm, and their versions without garbage collection), as an alternative to FAST_COMPILE with minimal optimization but with machine code compilation incl. GPU testing. In my experience, though, fast_compile with any linker other than py results in some new errors and failures in some tests on amd64, and this might be the case on other architectures, too.

By the way, another useful feature is DebugMode for config.mode, which verifies the correctness of all optimizations and compares the C results to the Python results. If you want detailed info on Theano's configuration settings, do $ python -c 'import theano; print theano.config' | less, and check out the chapter config in the library documentation.
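For the record, the mode switch discussed above can also be made permanent in ~/.theanorc instead of passing THEANO_FLAGS each time; a hypothetical example:

```ini
# ~/.theanorc -- run with minimal graph optimization and the py linker
# (faster test suite, but no C compilation and no GPU tests)
[global]
mode = FAST_COMPILE
```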

cache maintenance

Theano isn't a JIT (just-in-time) compiler like Numba, which generates native machine code in memory and executes it immediately; instead it saves the generated native machine code into compiledirs. The reason for doing it that way is quite practical, as the docs explain: the persistent on-disk cache makes it possible to avoid generating code for the same operation twice, and to avoid compiling again when different operations generate the same code. The compiledirs are by default located within $(HOME)/.theano/.

After some time the folder becomes quite large, and might look something like this:

$ ls ~/.theano

If the used Python version changed, like in this example, you might want to purge the obsolete cache. For working with the cache, i.e. the compiledirs, the helper theano-cache comes in handy. If you invoke it without any arguments, the current cache location is printed, like ~/.theano/compiledir_Linux-4.5--amd64-x86_64-with-debian-stretch-sid--2.7.12-64 (when the script is run from /usr/share/python-theano). So, the compiledirs for the old Python versions in this example (11+ and 12rc1) can be removed to free the space they occupy.

The whole cache, meaning all compiledirs, can be erased with $ theano-cache basecompiledir purge; the effect is the same as performing $ rm -rf ~/.theano. You might want to do that e.g. when you're using different hardware, like when you've got yourself another graphics card, or habitually from time to time when the compiledirs fill up so much that processing slows down with the hard disk being busy all the time, if you don't have an SSD available. For example, the disk space of build chroots carrying (mainly) the tests completely compiled through on default Python 2 and Python 3 comes to around 1.3 GB (see here).

BLAS implementations

Theano needs a level 3 implementation of BLAS (Basic Linear Algebra Subprograms) for operations between vectors (one-dimensional mathematical objects) and matrices (two-dimensional objects) carried out on the CPU. NumPy is already built on BLAS and pulls in the standard implementation (libblas3, source package: lapack), but Theano links to it directly instead of using NumPy as an intermediate layer, to reduce the computational overhead. For this, Theano needs development headers, and the binary packages pull in libblas-dev by default, unless a development package of another BLAS implementation (like OpenBLAS or ATLAS) is already installed or pulled in with them (providing the virtual package libblas.so). The linker flags can be manipulated directly through the configuration switch config.blas.ldflags, which is by default set to -L/usr/lib -lblas -lblas. By the way, if you set it to an empty value, Theano falls back to using BLAS through NumPy, if you want that for some reason.

On Debian, there is a very convenient way to switch between BLAS implementations by the alternatives mechanism. If you have several alternative implementations installed at the same time, you can switch from one to another easily by just doing:

$ sudo update-alternatives --config libblas.so
There are 3 choices for the alternative libblas.so (providing /usr/lib/libblas.so).

  Selection    Path                                  Priority   Status
* 0            /usr/lib/openblas-base/libblas.so      40        auto mode
  1            /usr/lib/atlas-base/atlas/libblas.so   35        manual mode
  2            /usr/lib/libblas/libblas.so            10        manual mode
  3            /usr/lib/openblas-base/libblas.so      40        manual mode

Press <enter> to keep the current choice[*], or type selection number:

The implementations perform differently on different hardware, so you might want to take the time to compare which one does best on your processor (the other packages are libatlas-base-dev and libopenblas-dev), and choose that one to optimize your system. If you want to squeeze out everything there is for carrying out Theano's computations on the CPU, another option is to compile an optimized version of a BLAS library especially for your processor. I'm going to write another blog post on this issue.

The binary packages of Theano ship the script check_blas.py to check how well a BLAS implementation performs with it, and whether everything works right. The script is located in the misc subfolder of the library; you can locate it with $ dpkg -L python-theano | grep check_blas (or for the package python3-theano accordingly), and run it with the Python interpreter. By default the script puts out a lot of info, like a huge performance-comparison reference table, the current setting of blas.ldflags, the compiledir, the setting of floatX, OS information, the GCC version, the current NumPy configuration towards BLAS, the NumPy location and version, whether Theano linked directly or used the NumPy binding, and finally, most importantly, the execution time. If just the execution time is needed for quick performance comparisons, the script can be invoked with -q.

Theano on CUDA

The function compiler of Theano works with alternative backends to carry out the computations, like the ones for graphics cards. Currently, two different backends for GPU processing are available: one docks onto NVIDIA's CUDA (Compute Unified Device Architecture) technology7, while the other is based on libgpuarray, which is also developed by the Theano developers in parallel.

The libgpuarray library is an interesting alternative for Theano; it's a GPU tensor (multi-dimensional mathematical object) array library written in C with Python bindings based on Cython, which has the advantage of also running on OpenCL8. OpenCL, unlike CUDA9, is fully free software and vendor neutral, and overcomes the limitation of the CUDA toolkit being available only for amd64 and the ppc64el port (see here). I've opened an ITP for libgpuarray and we'll see if and how this works out. Another reason it would be great to have it available is that CUDA currently seems to run into problems with GCC 610. More on that, soon.

Here's a little checklist for setting up your CUDA device, so that you don't have to experience something like this:

$ THEANO_FLAGS=device=gpu,floatX=float32 python ./cat_dog_classifier.py 
WARNING (theano.sandbox.cuda): CUDA is installed, but device gpu is not available (error: Unable to get the number of gpus available: no CUDA-capable device is detected)

hardware check

For running Theano on CUDA you need an NVIDIA graphics card that is capable of doing that. You can recheck whether your device is supported by CUDA here. Unless the hardware is too old (CUDA support started with the GeForce 8 and Quadro X series) or too exotic, I think it fails to work only in exceptional cases. You can check your model, and whether the device is present in the system at the bare hardware level, by doing this:

$ lspci | grep -i nvidia
04:00.0 3D controller: NVIDIA Corporation GM108M [GeForce 940M] (rev a2)

If a line like this doesn't get returned, your device is most probably broken, or not properly connected (ouch). If rev ff appears at the end of the line, that means the device is off, i.e. powered down. This might happen if you have a laptop with Optimus graphics hardware, and the related drivers have switched off the unoccupied device to save energy11.

kernel module

Running CUDA applications requires the proprietary NVIDIA driver kernel module to be loaded into the kernel and working.

If you haven't already installed them for another purpose: the NVIDIA driver and the CUDA toolkit are both in the non-free section of the Debian archive, which is not enabled by default. To get non-free packages you have to add non-free (and better also contrib) to your package sources in /etc/apt/sources.list, which might then look like this:

deb http://httpredir.debian.org/debian/ testing main contrib non-free

After doing that, run $ sudo apt-get update to update the package lists, and there you go with the non-free packages.

The headers of the running kernel are needed to compile modules, you can get them together with the NVIDIA kernel module package by running:

$ sudo apt-get install linux-headers-$(uname -r) nvidia-kernel-dkms build-essential

DKMS will then build the NVIDIA module for the kernel and do some other things on the system. When the installation has finished, it's generally advisable to reboot the system completely.


If you have problems with the CUDA device, it's advisable to verify that the following things concerning the NVIDIA driver and kernel module are in order:

blacklist nouveau

Check whether the default Nouveau kernel module driver (which blocks the NVIDIA module) for some reason still gets loaded, by doing $ lsmod | grep nouveau. If nothing gets returned, all is fine. If it's still in the kernel, just add blacklist nouveau to /etc/modprobe.d/blacklist.conf, and update the boot ramdisk with $ sudo update-initramfs -u afterwards. Then reboot once more; after that it shouldn't be loaded anymore.

rebuild kernel module

If the module hasn't been properly compiled for some reason, you can fix it by triggering a rebuild of the NVIDIA kernel module with $ sudo dpkg-reconfigure nvidia-kernel-dkms. When you're about to send your hardware in for repair because everything looks all right but the device just isn't working, this really can help (own experience).

After the rebuild of the module or modules (if you have a few kernel packages installed) has completed, you can recheck whether the module really is available by running:

$ sudo modinfo nvidia-current
filename:       /lib/modules/4.4.0-1-amd64/updates/dkms/nvidia-current.ko
alias:          char-major-195-*
version:        352.79
supported:      external
license:        NVIDIA
alias:          pci:v000010DEd00000E00sv*sd*bc04sc80i00*
alias:          pci:v000010DEd*sv*sd*bc03sc02i00*
alias:          pci:v000010DEd*sv*sd*bc03sc00i00*
depends:        drm
vermagic:       4.4.0-1-amd64 SMP mod_unload modversions 
parm:           NVreg_Mobile:int

It should be something similar to this when everything is all right.

reload kernel module

When there are problems with the GPU, maybe the kernel module isn't properly loaded. You can recheck whether the module has been loaded properly by doing

$ lsmod | grep nvidia
nvidia_uvm             73728  0
nvidia               8540160  1 nvidia_uvm
drm                   356352  7 i915,drm_kms_helper,nvidia

The kernel module can be loaded (or reloaded) with $ sudo nvidia-modprobe (that tool is from the package nvidia-modprobe).

unsupported graphics card

Be sure that your graphics card is supported by the current driver kernel module. If you have bought new hardware, it's quite possible for this to turn out to be the problem. You can get the version of the current NVIDIA driver with:

$ cat /proc/driver/nvidia/version 
NVRM version: NVIDIA UNIX x86_64 Kernel Module 352.79  Wed Jan 13 16:17:53 PST 2016
GCC version:  gcc version 5.3.1 20160528 (Debian 5.3.1-21)

Then google the version number, like nvidia 352.79; this should get you to an official driver download page like this one. There, check what's listed under "Supported Products".

If you're stuck there, you have two options: wait until the driver in Debian gets updated, or replace it with the latest driver package from NVIDIA. The latter is possible to do, but more something for experienced users.

occupied graphics card

The CUDA driver cannot work while the graphical interface is busy, e.g. with processing the graphical display of your X.Org server. Which kernel driver is actually used to render the desktop can be examined with this command:12

$ grep '(II).*([0-9]):' /var/log/Xorg.0.log
[    37.700] (II) intel(0): Using Kernel Mode Setting driver: i915, version 1.6.0 20150522
[    37.700] (II) intel(0): SNA compiled: xserver-xorg-video-intel 2:2.99.917-2 (Vincent Cheng <vcheng@debian.org>)
[    39.808] (II) intel(0): switch to mode 1920x1080@60.0 on eDP1 using pipe 0, position (0, 0), rotation normal, reflection none
[    39.810] (II) intel(0): Setting screen physical size to 508 x 285
[    67.576] (II) intel(0): EDID vendor "CMN", prod id 5941
[    67.576] (II) intel(0): Printing DDC gathered Modelines:
[    67.576] (II) intel(0): Modeline "1920x1080"x0.0  152.84  1920 1968 2000 2250  1080 1083 1088 1132 -hsync -vsync (67.9 kHz eP)

This example shows that the rendering of the desktop is performed by the graphics device of the Intel CPU, which is just what's needed for running CUDA applications on your NVIDIA graphics card, if you don't have another one.


With the Debian package of the CUDA toolkit, everything pretty much runs out of the box for Theano. Just install it with apt-get, and you're ready to go; the CUDA backend is the default one. PyCUDA is also a suggested dependency of the binary packages; it can be pulled in together with the CUDA toolkit.

The up-to-date CUDA release 7.5 is of course available; with it you have Maxwell architecture support, so you can run Theano on e.g. a GeForce GTX Titan X with 6.2 TFLOPS of single precision13 at an affordable price. CUDA 814 is around the corner, with support for the new Pascal architecture15: the GeForce GTX 1080 high-end gaming graphics card already delivers 8.23 TFLOPS16. When it comes to professional GPGPU hardware like the Tesla P100, there is much more computational power available, scalable by multiplying cores and cards up to genuine little supercomputers that fit on a desk, like the DGX-117. Theano can use multiple GPUs for its calculations to make use of such highly scaled hardware; I'll write another blog post on this issue.

Theano on the GPU

It's not difficult to run Theano on the GPU.

Only single-precision floating point numbers (float32) are supported on the GPU, but that is sufficient for deep learning applications. Theano uses double-precision floats (float64) by default, so you have to set the configuration variable config.floatX to float32, as described above, either with the THEANO_FLAGS environment variable or, better, in your .theanorc file, if you're going to use the GPU a lot.

Switching to the GPU actually happens with the config.device configuration variable, which must be set to either gpu or gpu0, gpu1 etc., to choose a particular one if multiple devices are available.
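Written into ~/.theanorc, the two settings from above would look like this (a sketch; a specific card can be selected with gpu0, gpu1, etc.):

```ini
# ~/.theanorc -- compute on the first available GPU with single precision
[global]
device = gpu
floatX = float32
```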

Here's a little test script; it's taken from the docs but slightly altered:

from __future__ import print_function
from theano import function, config, shared, sandbox
import theano.tensor as T
import numpy
import time
from six.moves import range

vlen = 10 * 30 * 768  # 10 x #cores x # threads per core
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], T.exp(x))
t0 = time.time()
for i in range(iters):
    r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))
print("Result is %s" % (r,))
if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):
    print("Used the cpu")
else:
    print("Used the gpu")

You can run that script either with python or python3 (there was a single test failure on the Python 3 package, so the Python 2 library might currently be a little more stable). For comparison, here's an example of how it performs on my hardware, once on the CPU and once on the GPU:

$ THEANO_FLAGS=floatX=float32 python ./check1.py 
[Elemwise{exp,no_inplace}(<TensorType(float32, vector)>)]
Looping 1000 times took 4.481719 seconds
Result is [ 1.23178029  1.61879337  1.52278066 ...,  2.20771813  2.29967761
Used the cpu

$ THEANO_FLAGS=floatX=float32,device=gpu python ./check1.py 
Using gpu device 0: GeForce 940M (CNMeM is disabled, cuDNN not available)
[GpuElemwise{exp,no_inplace}(<CudaNdarrayType(float32, vector)>), HostFromGpu(GpuElemwise{exp,no_inplace}.0)]
Looping 1000 times took 1.164906 seconds
Result is [ 1.23178029  1.61879349  1.52278066 ...,  2.20771813  2.29967761
Used the gpu

If you got a result like this you're ready to go with Theano on Debian, training computer vision classifiers or whatever you want to do with it. I'll write more on for what Theano could be used, soon.

  1. Some ports are disabled because they are currently not supported by Theano. There are NotImplementedErrors and other errors in the tests, on the numpy.ndarray object not being aligned. The developers commented on that, see here. And on some ports the build flags -m32 and -m64 of Theano aren't supported by g++; the build flags can't be manipulated easily. 

  2. Theano Development Team: "Theano: a Python framework for fast computation of mathematical expressions" 

  3. Marc Couture: "Today's high-powered GPUs: strong for graphics and for maths". In: RTC magazine June 2015, pp. 22–25 

  4. Ogier Maitre: "Understanding NVIDIA GPGPU hardware". In: Tsutsui/Collet (eds.): Massively parallel evolutionary computation on GPGPUs. Berlin, Heidelberg: Springer 2013, pp. 15-34 

  5. Geoffrey French: "Deep learning tutorial: advanced techniques". PyData London 2016 presentation 

  6. As the description of the Lintian tag binary-without-manpage says, that's not needed for scripts placed in /usr/share 

  7. Tom. R. Halfhill: "Parallel processing with CUDA: Nvidia's high-performance computing platform uses massive multithreading". In: Microprocessor Report January 28, 2008 

  8. Faber et al.: "Parallelwelten: GPU-Programmierung mit OpenCL". In: C't 26/2014, pp. 160-165 

  9. For comparison, see: Valentine Sinitsyn: "Feel the taste of GPU programming". In: Linux Voice February 2015, pp. 106-109 

  10. https://lists.debian.org/debian-devel/2016/07/msg00004.html 

  11. If Optimus (hybrid) graphics hardware is present (like commonly today on PC laptops), Debian launches the X-server on the graphics processing unit of the CPU, which is ideal for CUDA. The problem with Optimus actually is the graphics processing on the dedicated GPU. If you are using Bumblebee, the Python interpreter which you want to run Theano on has be to be started with the launcher primusrun, because Bumblebee powers the GPU down with the tool bbswitch every time it isn't used, and I think also the kernel module of the driver is dynamically loaded. 

  12. Thorsten Leemhuis: "Treiberreviere. Probleme mit Grafiktreibern für Linux lösen": In: C't Nr.2/2013, pp. 156-161 

  13. Martin Fischer: "4K-Rakete: Die schnellste Single-GPU-Grafikkarte der Welt". In C't 13/2015, pp. 60-61 

  14. http://www.heise.de/developer/meldung/Nvidia-CUDA-8-bringt-Optimierungen-fuer-die-Pascal-Architektur-3164254.html 

  15. Martin Fischer: "All In: Nvidia enthüllt die GPU-Architektur 'Pascal'". In: C't 9/2016, pp. 30-31 

  16. Martin Fischer: "Turbo-Pascal: High-End-Grafikkarte für Spieler: GeForce GTX 1080". In: C't 13/2016, pp. 100-103 

  17. http://www.golem.de/news/dgx-1-nvidias-supercomputerchen-mit-8x-tesla-p100-1604-120155.html 

20 July, 2016 05:13AM by Daniel Stender

July 19, 2016

hackergotchi for Michael Prokop

Michael Prokop

DebConf16 in Capetown/South Africa: Lessons learnt

DebConf 16 in Capetown/South Africa was fantastic for many reasons.

My Capetown/South Africa/Culture/Flight related lessons:

  • Avoid flying on Sundays (especially in/from Austria where plenty of hotlines are closed on Sundays or at least not open when you need them)
  • Actually turn back your seat on the flight when trying to sleep and not forget that this option exists *cough*
  • While UCT claims to take energy saving quite seriously (e.g. “turn off the lights” mentioned in many places around the campus), several toilets flush all their water even when you're trying to do just small™ business, and two big lights in front of a main building seem to be shining all day long for no apparent reason
  • There doesn’t seem to be a standard for the side of hot vs. cold water-taps
  • Soap pieces and towels on several toilets
  • For pedestrians there’s just a very short time of green at the traffic lights (~2-3 seconds), then red blinking lights show that you can continue walking across the street (but *should* not start walking) until it’s fully red again (but not many people seem to care about the rules anyway :))
  • Warning lights of cars are used for saying thanks (compared to hand waving in e.g. Austria)
  • The 40km/h speed limit signs on the roads seem to be showing the recommended minimum speed :-)
  • There are many speed bumps on the roads
  • Geese quacking past 11:00 p.m. close to a sleeping room are something I’m also not used to :-)
  • Announced downtimes for the Internet connection are something I’m not used to
  • WLAN in the dorms of UCT as well as in any other place I went to at UCT worked excellent (measured ~22-26 Mbs downstream in my room, around 26Mbs in the hacklab) (kudos!)
  • WLAN is available even on top of the Table Mountain (WLAN working and being free without any registration)
  • Number26 credit card is great to withdraw money from ATMs without any extra fees from common credit card companies (except for the fee the ATM itself charges but displays ahead on-site anyway)
  • Splitwise is a nice way to share expenses on the road, especially with its mobile app and the money beaming using the Number26 mobile app

My technical lessons from DebConf16:

  • ran into way too many yak-shaving situations, some of them might warrant separate blog posts…
  • finally got my hands on gbp-pq (manage quilt patches on patch queue branches in git): very nice to be able to work with plain git and then get patches for your changes, also having upstream patches (like cherry-picks) inside debian/patches/ and the debian specific changes inside debian/patches/debian/ is a lovely idea, this can be easily achieved via “Gbp-Pq: Topic debian” with gbp’s pq and is used e.g. in pkg-systemd, thanks to Michael Biebl for the hint and helping hand
  • David Bremner’s gitpkg/git-debcherry is something to also be aware of (thanks for the reminder, gregoa)
  • autorevision: extracts revision metadata from your VCS repository (thanks to pabs)
  • blhc: build log hardening check
  • Guido’s gbp skills exchange session reminded me once again that I should use `gbp import-dsc --download $URL_TO_DSC` more often
  • sources.debian.net features specific copyright + patches sections (thanks, Matthieu Caneill)
  • dpkg-mergechangelogs(1) for 3-way merge of debian/changelog files (thanks, buxy)
  • meta-git from pkg-perl is always worth a closer look
  • ifupdown2 (its current version is also available in jessie-backports!) has some nice features, like `ifquery --running $interface` to get the live configuration of a network interface, JSON support (`ifquery --format=json …`) and Mako template support to generate configuration for plenty of interfaces

BTW, thanks to the video team the recordings from the sessions are available online.

19 July, 2016 08:48PM by mika


Joey Hess

Re: Debugging over email

Lars wrote about the remote debugging problem.

I write free software and I have some users. My primary support channels are over email and IRC, which means I do not have direct access to the system where my software runs. When one of my users has a problem, we go through one or more cycles of them reporting what they see and me asking them for more information, or asking them to try this thing or that thing and report results. This can be quite frustrating.

I want, nay, need to improve this.

This is also something I've thought about on and off, that affects me most every day.

I've found that building the test suite into the program, such that users can run it at any time, is a great way to smoke out problems. If a user thinks they have problem A but the test suite explodes, or also turns up problems B C D, then I have much more than the user's problem report to go on. git annex test is a good example of this.

Asking users to provide a recipe to reproduce the bug is very helpful; I do it in the git-annex bug report template, and while not all users do, and users often provide a reproduction recipe that doesn't quite work, it's great in triage to be able to try a set of steps without thinking much and see if you can reproduce the bug. So I tend to look at such bug reports first, and solve them more quickly, which tends towards a virtuous cycle.

I've noticed that reams of debugging output, logs, test suite failures, etc. can be useful once I'm well into tracking a problem down. But during triage, they make it harder to understand what the problem actually is. Information overload. Being able to reproduce the problem myself is far more valuable than this stuff.

I've noticed that once I am in a position to run some commands in the environment that has the problem, it seems to be much easier to solve it than when I'm trying to get the user to debug it remotely. This must be partly psychological?

Partly, I think that the feeling of being at a remove from the system makes it harder to think of what to do. And then there are the times where the user pastes some output of running some commands and I mentally skip right over an important part of it. Because I didn't think to run one of the commands myself.

I wonder if it would be helpful to have a kind of ssh equivalent, where all commands get vetted by the remote user before being run on their system. (And the user can also see command output before it gets sent back, to NACK sending of personal information.) So, it looks and feels a lot like you're in a mosh session to the user's computer (which need not have a public IP or have an open ssh port at all), although one with a lot of lag and where rm -rf / doesn't go through.

19 July, 2016 04:57PM


Lars Wirzenius

Debugging over email

I write free software and I have some users. My primary support channels are over email and IRC, which means I do not have direct access to the system where my software runs. When one of my users has a problem, we go through one or more cycles of them reporting what they see and me asking them for more information, or asking them to try this thing or that thing and report results. This can be quite frustrating.

I want, nay, need to improve this. I've been thinking about this for a while, and talking with friends about it, and here's my current ideas.

First idea: have a script that gathers as much information as possible, which the user can run. For example, log files, full configuration, full environment, etc. The user would then mail the output to me. The information will need to be anonymised suitably so that no actual secrets are leaked. This would be similar to Debian's package specific reportbug scripts.
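A minimal Python sketch of such a gather-everything script might look like this; the log path and the secret-variable heuristics are illustrative assumptions, not part of any existing tool:

```python
import os
import platform
import sys

# Hypothetical diagnostics gatherer; the log path and the list of
# secret-looking variable names are illustrative assumptions.
SECRET_HINTS = ("KEY", "TOKEN", "PASSWORD", "SECRET")

def anonymise(env):
    """Redact values of secret-looking variables before they leave the machine."""
    return {k: "<redacted>" if any(h in k.upper() for h in SECRET_HINTS) else v
            for k, v in env.items()}

def gather_report(log_path="/var/log/myprogram.log", max_log_lines=200):
    """Collect interpreter version, platform info, environment, and a log tail."""
    report = {
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        "environment": anonymise(dict(os.environ)),
    }
    try:
        with open(log_path) as f:
            report["log_tail"] = f.readlines()[-max_log_lines:]
    except OSError as e:
        report["log_tail"] = ["<could not read log: %s>" % e]
    return report
```

The user would run this and mail the output; the redaction step is the part that needs the most care, since it decides what never leaves their machine.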

Second idea: make it less likely that the user needs help solving their issue, with better error messages. This would require error messages to have sufficient explanation that a user can solve their problem. That doesn't necessarily mean a lot of text, but also code that analyses the situation when the error happens to include things that are relevant for the problem resolving process, and giving error messages that are as specific as possible. Example: don't just fail saying "write error", but make the code find out why writing caused an error.
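A sketch of the "find out why writing caused an error" idea; the paths, messages, and helper names here are illustrative, not from any particular program:

```python
import errno
import os
import shutil

def explain_write_error(path, exc):
    """Turn a bare OSError into a message that points at the likely cause."""
    if exc.errno == errno.ENOSPC:
        free = shutil.disk_usage(os.path.dirname(path) or ".").free
        return "no space left on device writing %s (%d bytes free)" % (path, free)
    if exc.errno == errno.EACCES:
        return "permission denied writing %s (check ownership of %s)" % (
            path, os.path.dirname(path) or ".")
    if exc.errno == errno.EROFS:
        return "%s is on a read-only filesystem" % path
    return "error writing %s: %s" % (path, exc.strerror)

def write_file(path, data):
    try:
        with open(path, "w") as f:
            f.write(data)
    except OSError as e:
        raise RuntimeError(explain_write_error(path, e)) from e
```

The point is that the error path does a little investigation of its own, so the user's first report already narrows the problem down.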

Third idea: in addition to better error messages, we might provide diagnostic tools as well.

A friend suggested having a script that sets up a known good set of operations and verifies they work. This would establish a known-working baseline, or smoke test, so that we can rule things like "software isn't completely installed".

Do you have ideas? Mail me (liw@liw.fi) or tell me on identi.ca (@liw) or Twitter (@larswirzenius).

19 July, 2016 03:08PM


Dirk Eddelbuettel

Rcpp 0.12.6: Rolling on

The sixth update in the 0.12.* series of Rcpp has arrived on the CRAN network for GNU R a few hours ago, and was just pushed to Debian. This 0.12.6 release follows the 0.12.0 release from late July, the 0.12.1 release in September, the 0.12.2 release in November, the 0.12.3 release in January, the 0.12.4 release in March, and the 0.12.5 release in May --- making it the tenth release at the steady bi-monthly release frequency. Just like the previous release, this one is once again more of a refining maintenance release which addresses small bugs, nuisances or documentation issues without adding any major new features. That said, some nice features (such as caching support for sourceCpp() and friends) were added.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 703 packages on CRAN depend on Rcpp for making analytical code go faster and further. That is up by about forty packages from the last release in May!

Similar to the previous releases, we have contributions from first-time committers. Artem Klevtsov made na_omit run faster on vectors without NA values. Otherwise, we had many contributions from "regulars" like Kirill Mueller, James "coatless" Balamuta and Dan Dillon as well as from fellow Rcpp Core contributors. Some noteworthy highlights are encoding and string fixes, generally more robust builds, a new iterator-based approach for vectorized programming, the aforementioned caching for sourceCpp(), and several documentation enhancements. More details are below.

Changes in Rcpp version 0.12.6 (2016-07-18)

  • Changes in Rcpp API:

    • The long long data type is used only if it is available, to avoid compiler warnings (Kirill Müller in #488).

    • The compiler is made aware that stop() never returns, to improve code path analysis (Kirill Müller in #487 addressing issue #486).

    • String replacement was corrected (Qiang in #479 following mailing list bug report by Masaki Tsuda)

    • Allow for UTF-8 encoding in error messages via RCPP_USING_UTF8_ERROR_STRING macro (Qin Wenfeng in #493)

    • The R function Rf_warningcall is now provided as well (as usual without leading Rf_) (#497 fixing #495)

  • Changes in Rcpp Sugar:

    • Const-ness of min and max functions has been corrected. (Dan Dillon in PR #478 fixing issue #477).

    • Ambiguities for matrix/vector and scalar operations have been fixed (Dan Dillon in PR #476 fixing issue #475).

    • New algorithm header using iterator-based approach for vectorized functions (Dan in PR #481 revisiting PR #428 and addressing issue #426, with further work by Kirill in PR #488 and Nathan in #503 fixing issue #502).

    • The na_omit() function is now faster for vectors without NA values (Artem Klevtsov in PR #492)

  • Changes in Rcpp Attributes:

    • Add cacheDir argument to sourceCpp() to enable caching of shared libraries across R sessions (JJ in #504).

    • Code generation now deals correctly with packages containing a dot in their name (Qiang in #501 fixing #500).

  • Changes in Rcpp Documentation:

    • A section on default parameters was added to the Rcpp FAQ vignette (James Balamuta in #505 fixing #418).

    • The Rcpp-attributes vignette is now mentioned more prominently in question one of the Rcpp FAQ vignette.

    • The Rcpp Quick Reference vignette received a facelift with new sections on Rcpp attributes and plugins being added. (James Balamuta in #509 fixing #484).

    • The bib file was updated with respect to the recent JSS publication for RProtoBuf.

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

19 July, 2016 12:49PM


Chris Lamb

Python quirk: os.stat's return type

import os
import stat

st = os.stat('/etc/fstab')

# __getitem__ returns the timestamp truncated to an int
x = st[stat.ST_MTIME]
print((x, type(x)))

# __getattr__ returns the full-precision float
x = st.st_mtime
print((x, type(x)))

# Output:
(1441565864, <class 'int'>)
(1441565864.3485234, <class 'float'>)
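If you want full precision without the float rounding, Python 3.3 added the *_ns attributes; this quick check of the relationship between the three accessors uses `.` so it works on any existing path:

```python
import os
import stat

st = os.stat('.')  # any existing path works

# Index access returns whole seconds as an int:
secs = st[stat.ST_MTIME]
# Attribute access returns a float with (lossy) sub-second precision:
approx = st.st_mtime
# Since Python 3.3, the *_ns attributes give exact nanoseconds as an int:
ns = st.st_mtime_ns

assert isinstance(secs, int) and isinstance(approx, float) and isinstance(ns, int)
# The indexed int is the whole-second part of the nanosecond value:
assert secs == ns // 10**9
```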

19 July, 2016 10:20AM

July 18, 2016

John Goerzen

Building a home firewall: review of pfsense

For some time now, I’ve been running OpenWRT on an RT-N66U device. I initially set that up because I had previously been using my Debian-based file/VM server as a firewall, and this had some downsides: every time I wanted to reboot that, Internet for the whole house was down; shorewall took a fair bit of care and feeding; etc.

I’ve been having indications that all is not well with OpenWRT or the N66U in the last few days, and some long-term annoyances prompted me to search out a different solution. I figured I could buy an embedded x86 device, slap Debian on it, and be set.

The device I wound up purchasing happened to have pfsense preinstalled, so I thought I’d give it a try.

As expected, with hardware like that to work with, it was a lot more capable than OpenWRT and had more features. However, I encountered a number of surprising issues.

The biggest annoyance was that the system wouldn’t allow me to set up a static DHCP entry with the same IP for multiple MAC addresses. This is a very simple configuration in the underlying DHCP server, and OpenWRT permitted it without issue. It is quite useful so my laptop has the same IP whether connected by wifi or Ethernet, and I have used it for years with no issue. Googling it a bit turned up some rather arrogant pfsense people saying that this is “broken” and poor design, and that your wired and wireless networks should be on different VLANs anyhow. They also said “just give it the same hostname for the different IPs” — but it rejects this too. Sigh. I discovered, however, that downloading the pfsense backup XML file, editing the IP within, and re-uploading it gets me what I want with no ill effects!

So then I went to set up DNS. I tried to enable the “DNS Forwarder”, but it wouldn’t let me do that while the “DNS Resolver” was still active. Digging in just a bit, it appears that the DNS Forwarder and DNS Resolver both provide forwarding and resolution features; they just have different underlying implementations. This is not clear at all in the interface.

Next stop: traffic shaping. Since I use VOIP for work, this is vitally important for me. I dove in, and found a list of XML filenames for wizards: one for “Dedicated Links” and another for “Multiple Lan/Wan”. Hmmm. Some Googling again turned up that everyone suggests using the “Multiple Lan/Wan” wizard. Fine. I set it up, and notice that when I start an upload, my download performance absolutely tanks. Some investigation shows that outbound ACKs aren’t being handled properly. The wizard had created a qACK queue, but neglected to create a packet match rule for it, so ACKs were not being dealt with appropriately. Fixed that with a rule of my own design, and now downloads are working better again. I also needed to boost the bandwidth allocated to qACK (setting it to 25% seemed to do the trick).

Then there were the firewall rules. The “interface” section is first-match-wins, whereas the “floating” section is last-match-wins. This is rather non-obvious.
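The difference between the two orders is easy to state in code; this toy matcher (not pfsense's implementation) shows how the same ruleset gives opposite answers depending on evaluation order:

```python
# Toy illustration of first-match-wins vs last-match-wins rule evaluation.
def first_match(rules, packet):
    """Interface-style: the first matching rule decides."""
    for rule in rules:
        if rule["match"](packet):
            return rule["action"]
    return "default"

def last_match(rules, packet):
    """Floating-style: a later matching rule overrides an earlier one."""
    action = "default"
    for rule in rules:
        if rule["match"](packet):
            action = rule["action"]
    return action

rules = [
    {"match": lambda p: p["port"] == 22, "action": "allow"},
    {"match": lambda p: True,            "action": "block"},
]
pkt = {"port": 22}
print(first_match(rules, pkt))  # "allow"
print(last_match(rules, pkt))   # "block"
```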

Getting past all the interface glitches, however, the system looks powerful, solid, and well-engineered under the hood, and fairly easy to manage.

18 July, 2016 09:34PM by John Goerzen

Reproducible builds folks

Preparing for the second release of reprotest

Author: ceridwen

I now have working test environments set up for null (no container, build on the host system), schroot, and qemu. After fixing some bugs, null and qemu now pass all their tests!

schroot still has a permission error related to disorderfs. Since the same code works for null and qemu and for schroot when disorderfs is disabled, it's something specific to disorderfs and/or its combination with schroot. The following is debug output that shows ls for the build directory on the testbed before and after the mock build, and stat for both the build directory and the mock build artifact itself. The first control run, without disorderfs, succeeds:

test.py: DBG: testbed command ['ls', '-l', '/tmp/autopkgtest.5oMipL/control/'], kind short, sout raw, serr raw, env []
total 20
drwxr-xr-x 2 user user 4096 Jul 15 23:43 __pycache__
-rwxr--r-- 1 user user 2340 Jun 28 18:43 mock_build.py
-rwxr--r-- 1 user user  175 Jun  3 15:42 mock_failure.py
-rw-r--r-- 1 user user  252 Jun 14 16:06 template.ini
-rwxr-xr-x 1 user user 1600 Jul 15 23:18 tests.py
test.py: DBG: testbed command exited with code 0
test.py: DBG: testbed command ['sh', '-ec', 'cd /tmp/autopkgtest.5oMipL/control/ ;\n python3 mock_build.py ;\n'], kind short, sout raw, serr pipe, env ['LANG=en_US.UTF-8', 'HOME=/nonexistent/first-build', 'VIRTUAL_ENV=~/code/reprotest/.tox/py35', 'PATH=~/code/reprotest/.tox/py35/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'PYTHONHASHSEED=559200286', 'TZ=GMT+12']
test.py: DBG: testbed command exited with code 0
test.py: DBG: testbed command ['ls', '-l', '/tmp/autopkgtest.5oMipL/control/'], kind short, sout raw, serr raw, env []
total 20
drwxr-xr-x 2 user user 4096 Jul 15 23:43 __pycache__
-rw-r--r-- 1 root root    0 Jul 18 15:06 artifact
-rwxr--r-- 1 user user 2340 Jun 28 18:43 mock_build.py
-rwxr--r-- 1 user user  175 Jun  3 15:42 mock_failure.py
-rw-r--r-- 1 user user  252 Jun 14 16:06 template.ini
-rwxr-xr-x 1 user user 1600 Jul 15 23:18 tests.py
test.py: DBG: testbed command exited with code 0
test.py: DBG: testbed command ['stat', '/tmp/autopkgtest.5oMipL/control/'], kind short, sout raw, serr raw, env []
  File: '/tmp/autopkgtest.5oMipL/control/'
  Size: 4096        Blocks: 8          IO Block: 4096   directory
Device: 56h/86d Inode: 1351634     Links: 3
Access: (0755/drwxr-xr-x)  Uid: ( 1000/    user)   Gid: ( 1000/    user)
Access: 2016-07-18 15:06:31.105915342 -0400
Modify: 2016-07-18 15:06:31.089915352 -0400
Change: 2016-07-18 15:06:31.089915352 -0400
 Birth: -
test.py: DBG: testbed command exited with code 0
test.py: DBG: testbed command ['stat', '/tmp/autopkgtest.5oMipL/control/artifact'], kind short, sout raw, serr raw, env []
  File: '/tmp/autopkgtest.5oMipL/control/artifact'
  Size: 0           Blocks: 0          IO Block: 4096   regular empty file
Device: fc01h/64513d    Inode: 40767795    Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2016-07-18 15:06:31.089915352 -0400
Modify: 2016-07-18 15:06:31.089915352 -0400
Change: 2016-07-18 15:06:31.089915352 -0400
 Birth: -
test.py: DBG: testbed command exited with code 0
test.py: DBG: sending command to testbed: copyup /tmp/autopkgtest.5oMipL/control/artifact /tmp/tmpw_mwks82/control_artifact
schroot: DBG: executing copyup /tmp/autopkgtest.5oMipL/control/artifact /tmp/tmpw_mwks82/control_artifact
schroot: DBG: copyup_shareddir: tb /tmp/autopkgtest.5oMipL/control/artifact host /tmp/tmpw_mwks82/control_artifact is_dir False downtmp_host /var/lib/schroot/mount/jessie-amd64-ac94881d-ae71-4f24-a004-1847889d5d52//tmp/autopkgtest.5oMipL
schroot: DBG: copyup_shareddir: tb(host) /var/lib/schroot/mount/jessie-amd64-ac94881d-ae71-4f24-a004-1847889d5d52/tmp/autopkgtest.5oMipL/control/artifact is not already at destination /tmp/tmpw_mwks82/control_artifact, copying
test.py: DBG: got reply from testbed: ok

That last bit indicates that copy command for the build artifact from the testbed to a temporary directory on the host succeeded. This is the debug output from the second run, with disorderfs enabled:

test.py: DBG: testbed command ['ls', '-l', '/tmp/autopkgtest.5oMipL/disorderfs/'], kind short, sout raw, serr raw, env []
total 20
drwxr-xr-x 2 user user 4096 Jul 15 23:43 __pycache__
-rwxr--r-- 1 user user 2340 Jun 28 18:43 mock_build.py
-rwxr--r-- 1 user user  175 Jun  3 15:42 mock_failure.py
-rw-r--r-- 1 user user  252 Jun 14 16:06 template.ini
-rwxr-xr-x 1 user user 1600 Jul 15 23:18 tests.py
test.py: DBG: testbed command exited with code 0
test.py: DBG: testbed command ['sh', '-ec', 'cd /tmp/autopkgtest.5oMipL/disorderfs/ ;\n umask 0002 ;\n linux64 --uname-2.6 python3 mock_build.py ;\n'], kind short, sout raw, serr pipe, env ['LC_ALL=fr_CH.UTF-8', 'CAPTURE_ENVIRONMENT=i_capture_the_environment', 'HOME=/nonexistent/second-build', 'VIRTUAL_ENV=~/code/reprotest/.tox/py35', 'PATH=~/code/reprotest/.tox/py35/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin/i_capture_the_path', 'LANG=fr_CH.UTF-8', 'PYTHONHASHSEED=559200286', 'TZ=GMT-14']
test.py: DBG: testbed command exited with code 0
test.py: DBG: testbed command ['ls', '-l', '/tmp/autopkgtest.5oMipL/disorderfs/'], kind short, sout raw, serr raw, env []
total 20
drwxr-xr-x 2 user user 4096 Jul 15 23:43 __pycache__
-rw-r--r-- 1 root root    0 Jul 18 15:06 artifact
-rwxr--r-- 1 user user 2340 Jun 28 18:43 mock_build.py
-rwxr--r-- 1 user user  175 Jun  3 15:42 mock_failure.py
-rw-r--r-- 1 user user  252 Jun 14 16:06 template.ini
-rwxr-xr-x 1 user user 1600 Jul 15 23:18 tests.py
test.py: DBG: testbed command exited with code 0
test.py: DBG: testbed command ['stat', '/tmp/autopkgtest.5oMipL/disorderfs/'], kind short, sout raw, serr raw, env []
  File: '/tmp/autopkgtest.5oMipL/disorderfs/'
  Size: 4096        Blocks: 8          IO Block: 4096   directory
Device: 58h/88d Inode: 1           Links: 3
Access: (0755/drwxr-xr-x)  Uid: ( 1000/    user)   Gid: ( 1000/    user)
Access: 2016-07-18 15:06:31.201915291 -0400
Modify: 2016-07-18 15:06:31.185915299 -0400
Change: 2016-07-18 15:06:31.185915299 -0400
 Birth: -
test.py: DBG: testbed command exited with code 0
test.py: DBG: testbed command ['stat', '/tmp/autopkgtest.5oMipL/disorderfs/artifact'], kind short, sout raw, serr raw, env []
  File: '/tmp/autopkgtest.5oMipL/disorderfs/artifact'
  Size: 0           Blocks: 0          IO Block: 4096   regular empty file
Device: 58h/88d Inode: 7           Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2016-07-18 15:06:31.185915299 -0400
Modify: 2016-07-18 15:06:31.185915299 -0400
Change: 2016-07-18 15:06:31.185915299 -0400
 Birth: -
test.py: DBG: testbed command exited with code 0
test.py: DBG: sending command to testbed: copyup /tmp/autopkgtest.5oMipL/disorderfs/artifact /tmp/tmpw_mwks82/experiment_artifact
schroot: DBG: executing copyup /tmp/autopkgtest.5oMipL/disorderfs/artifact /tmp/tmpw_mwks82/experiment_artifact
schroot: DBG: copyup_shareddir: tb /tmp/autopkgtest.5oMipL/disorderfs/artifact host /tmp/tmpw_mwks82/experiment_artifact is_dir False downtmp_host /var/lib/schroot/mount/jessie-amd64-ac94881d-ae71-4f24-a004-1847889d5d52//tmp/autopkgtest.5oMipL
schroot: DBG: copyup_shareddir: tb(host) /var/lib/schroot/mount/jessie-amd64-ac94881d-ae71-4f24-a004-1847889d5d52/tmp/autopkgtest.5oMipL/disorderfs/artifact is not already at destination /tmp/tmpw_mwks82/experiment_artifact, copying
schroot: DBG: cleanup...
schroot: DBG: execute-timeout: schroot --run-session --quiet --directory=/ --chroot jessie-amd64-ac94881d-ae71-4f24-a004-1847889d5d52 --user=root -- rm -rf -- /tmp/autopkgtest.5oMipL
rm: cannot remove '/tmp/autopkgtest.5oMipL/disorderfs': Device or resource busy
schroot: DBG: execute-timeout: schroot --quiet --end-session --chroot jessie-amd64-ac94881d-ae71-4f24-a004-1847889d5d52
Unexpected error:
Traceback (most recent call last):
  File "~/code/reprotest/reprotest/lib/VirtSubproc.py", line 708, in mainloop
  File "~/code/reprotest/reprotest/lib/VirtSubproc.py", line 646, in command
    r = f(c, ce)
  File "~/code/reprotest/reprotest/lib/VirtSubproc.py", line 584, in cmd_copyup
    copyupdown(c, ce, True)
  File "~/code/reprotest/reprotest/lib/VirtSubproc.py", line 469, in copyupdown
    copyupdown_internal(ce[0], c[1:], upp)
  File "~/code/reprotest/reprotest/lib/VirtSubproc.py", line 494, in copyupdown_internal
    copyup_shareddir(sd[0], sd[1], dirsp, downtmp_host)
  File "~/code/reprotest/reprotest/lib/VirtSubproc.py", line 408, in copyup_shareddir
    shutil.copy(tb, host)
  File "/usr/lib/python3.5/shutil.py", line 235, in copy
    copyfile(src, dst, follow_symlinks=follow_symlinks)
  File "/usr/lib/python3.5/shutil.py", line 114, in copyfile
    with open(src, 'rb') as fsrc:
PermissionError: [Errno 13] Permission denied: '/var/lib/schroot/mount/jessie-amd64-ac94881d-ae71-4f24-a004-1847889d5d52/tmp/autopkgtest.5oMipL/disorderfs/artifact'

ls shows that the artifact is created in the right place. However, when reprotest tries to copy it from the testbed to the host, it gets a permission error. The traceback is coming from virt/schroot, and it's a Python open() call that's failing. Note that the permissions are wrong for the second run, but that's expected because my schroot is stable, so the umask bug isn't fixed yet; and that the rm error comes from disorderfs not being unmounted early enough (see below). I expect to see the umask test fail, though, not a crash in every test where the build succeeds.

After a great deal of effort, I traced the bug that was causing the process to hang not to my code or autopkgtest's code, but to CPython and contextlib. It's supposed to be fixed in CPython 3.5.3, but for now I've worked around the problem by monkey-patching the patch provided in the latter issue onto contextlib.
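Monkey-patching a stdlib module like that follows a common pattern; this is a generic illustration (not the actual CPython fix), wrapping contextlib.contextmanager while keeping a reference to the original:

```python
import contextlib

# Generic illustration of runtime monkey-patching (NOT the actual
# CPython 3.5.3 fix): replace a module attribute with a wrapper,
# keeping a reference to the original so behaviour is preserved.
_original_contextmanager = contextlib.contextmanager

def patched_contextmanager(func):
    helper = _original_contextmanager(func)
    helper.patched = True  # marker proving the patch is in effect
    return helper

contextlib.contextmanager = patched_contextmanager

@contextlib.contextmanager
def demo():
    yield "ok"
```

The wrapper delegates everything to the original, so existing callers keep working while the patched behaviour is layered on top.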

Here is my current to-do list:

  • Fix PyPi not installing the virt/ scripts correctly.

  • Move the disorderfs unmount into the shell script. (When the virt/ scripts encounter an error, they try to delete a temporary directory, which fails if disorderfs is mounted, so the script needs to unmount it before that happens.)

  • Find and fix the schroot/disorderfs permission error bug.

  • Convert my notes on setting up for the tests into something useful for users.

  • Write scripts to synch version numbers and documentation.

  • Fix the headers in the autopkgtest code to conform to reprotest style.

  • Add copyright information for the contextlib monkey-patch and the autopkgtest files I've changed.

  • Close #829113 as wontfix.

And here are the questions I'd like to resolve before the second release:

  • Is there any other documentation that's essential? Finishing the documentation will come later.

  • Should I release before finishing the rest of the variations? This will slow down the release of the first version with something resembling full functionality.

  • Do I need to write a chroot test now? Given the duplication with schroot, I'm unconvinced this is worthwhile.

18 July, 2016 03:51PM

July 17, 2016


Lisandro Damián Nicanor Pérez Meyer

KDEPIM ready to be more broadly tested

As was posted a couple of weeks ago, the latest version of KDEPIM has been uploaded to unstable.

All packages are now uploaded and built and we believe this version is ready to be more broadly tested.

If you run unstable but have refrained from installing the kdepim packages up to now, we would appreciate it if you go ahead and install them now, reporting any issues that you may find.

Given that this is a big update that includes quite a number of plugins and libraries, it's strongly recommended that you restart your KDE session after updating the packages.

Happy hacking,

The Debian Qt/KDE Team.

Note (Mon Jul 18 08:58:53 ART 2016): Link fixed and s/KDE/KDEPIM/.

17 July, 2016 11:15PM by Lisandro Damián Nicanor Pérez Meyer (noreply@blogger.com)

Iustin Pop

Energy bar restored!

So, I've been sick. Quite sick, as for the past ~2 weeks I wasn't able to bike, run, work or do much besides watching movies, looking at photos and playing some light games (ARPGs rule in this case; all you need to do is keep the left mouse button pressed).

It was supposed to be only a light viral infection, but it took longer to clear out than I expected, probably due to it happening right after my dental procedure (and possibly me wanting to restart exercise too soon, too fast). Not fun; it felt like the thing that refills your energy/mana bar in games broke. I simply didn't feel restored, despite sleeping a lot; 2-3 naps per day sound good as long as they are restorative, but if they're not, sleeping is just a chore.

The funny thing is that recovery happened so slow, that when I finally had energy it took me by surprise. It was like “oh, wait, I can actually stand and walk without feeling dizzy! Wohoo!” As such, yesterday was a glorious Saturday ☺

I was therefore able to walk a bit outside the house this weekend and feel like having a normal cold, not like being under a “cursed: -4 vitality” spell. I expect the final symptoms to clear out soon, and that I can very slowly start doing some light exercise again. Not tomorrow, though…

In the meantime, I'm sharing a picture from earlier this year that I found while looking through my stash. I was walking in the forest in Pontresina on a beautiful sunny day, when a sudden gust of wind caused a lot of the snow on the trees to fly around and make it look a bit magical (photo is unprocessed besides conversion from raw to JPEG; this is how it was straight out of the camera):

Winter in the forest

Why a winter photo? Because that's exactly how cold I felt the previous weekend: 30°C outside, but I was going to the doctor in jeans and hoodie and cap, shivering…

17 July, 2016 09:32PM

Michael Stapelberg

mergebot: easily merging contributions

Recently, I was wondering why I was pushing off accepting contributions in Debian for longer than in other projects. It occurred to me that the effort to accept a contribution in Debian is way higher than in other FOSS projects. My remaining FOSS projects are on GitHub, where I can just click the “Merge” button after deciding a contribution looks good. In Debian, merging is actually a lot of work: I need to clone the repository, configure it, merge the patch, update the changelog, build and upload.

I wondered how close we can bring Debian to a model where accepting a contribution is just a single click as well. In principle, I think it can be done.

To demonstrate the feasibility and collect some feedback, I wrote a program called mergebot. The first stage is done: mergebot can be used on your local machine as a command-line tool. You provide it with the source package and bug number which contains the patch in question, and it will do the rest:

midna ~ $ mergebot -source_package=wit -bug=#831331
2016/07/17 12:06:06 will work on package "wit", bug "831331"
2016/07/17 12:06:07 Skipping MIME part with invalid Content-Disposition header (mime: no media type)
2016/07/17 12:06:07 gbp clone --pristine-tar git+ssh://git.debian.org/git/collab-maint/wit.git /tmp/mergebot-743062986/repo
2016/07/17 12:06:09 git config push.default matching
2016/07/17 12:06:09 git config --add remote.origin.push +refs/heads/*:refs/heads/*
2016/07/17 12:06:09 git config --add remote.origin.push +refs/tags/*:refs/tags/*
2016/07/17 12:06:09 git config user.email stapelberg AT debian DOT org
2016/07/17 12:06:09 patch -p1 -i ../latest.patch
2016/07/17 12:06:09 git add .
2016/07/17 12:06:09 git commit -a --author Chris Lamb <lamby AT debian DOT org> --message Fix for “wit: please make the build reproducible” (Closes: #831331)
2016/07/17 12:06:09 gbp dch --release --git-author --commit
2016/07/17 12:06:09 gbp buildpackage --git-tag --git-export-dir=../export --git-builder=sbuild -v -As --dist=unstable
2016/07/17 12:07:16 Merge and build successful!
2016/07/17 12:07:16 Please introspect the resulting Debian package and git repository, then push and upload:
2016/07/17 12:07:16 cd "/tmp/mergebot-743062986"
2016/07/17 12:07:16 (cd repo && git push)
2016/07/17 12:07:16 (cd export && debsign *.changes && dput *.changes)

midna ~ $ cd /tmp/mergebot-743062986/repo
midna /tmp/mergebot-743062986/repo $ git log HEAD~2..
commit d983d242ee546b2249a866afe664bac002a06859
Author: Michael Stapelberg <stapelberg AT debian DOT org>
Date:   Sun Jul 17 13:32:41 2016 +0200

    Update changelog for 2.31a-3 release

commit 5a327f5d66e924afc656ad71d3bfb242a9bd6ddc
Author: Chris Lamb <lamby AT debian DOT org>
Date:   Sun Jul 17 13:32:41 2016 +0200

    Fix for “wit: please make the build reproducible” (Closes: #831331)
midna /tmp/mergebot-743062986/repo $ git push
Counting objects: 11, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (11/11), done.
Writing objects: 100% (11/11), 1.59 KiB | 0 bytes/s, done.
Total 11 (delta 6), reused 0 (delta 0)
remote: Sending notification emails to: dispatch+wit_vcs@tracker.debian.org
remote: Sending notification emails to: dispatch+wit_vcs@tracker.debian.org
To git+ssh://git.debian.org/git/collab-maint/wit.git
   650ee05..d983d24  master -> master
 * [new tag]         debian/2.31a-3 -> debian/2.31a-3
midna /tmp/mergebot-743062986/repo $ cd ../export
midna /tmp/mergebot-743062986/export $ debsign *.changes && dput *.changes
Uploading wit_2.31a-3.dsc
Uploading wit_2.31a-3.debian.tar.xz
Uploading wit_2.31a-3_amd64.deb
Uploading wit_2.31a-3_amd64.changes

Of course, this is not quite as convenient as clicking a “Merge” button yet. I have some ideas on how to make that happen, but I need to know whether people are interested before I spend more time on this.

Please see github.com/Debian/mergebot for more details, and please get in touch if you think this is worthwhile or would even like to help. Feedback is accepted in the GitHub issue tracker for mergebot or the project mailing list mergebot-discuss. Thanks!

17 July, 2016 12:00PM


Vasudev Kamath

Switching from approx to apt-cacher-ng

After a long journey of about five years (since 2011) with approx, I finally wanted to switch to something new: apt-cacher-ng. After a bit of reconfiguration I managed to fit apt-cacher-ng into my workflow.

Bit of History

I should first give you a brief account of how I started using approx. It all started at MiniDebconf 2011, which I organized at my alma mater. I met Jonas Smedegaard there, and from him I learned about approx. Jonas has a bunch of machines at his home, and he was an active user of approx; he showed it to me while explaining the Boxer project. I was quite impressed with approx. Back then I was using a slow 230 kbps Internet connection, and I was also maintaining a couple of packages in Debian. Updating the pbuilder chroots was a time-consuming task for me, as I had to download packages multiple times over the slow connection. Approx largely solved this problem, and I started using it.

Fast forward 5 years: I now have quite fast Internet with a good FUP (about 50GB a month), but I still tend to use approx, which makes building packages quite a bit faster. I also run a couple of containers on my laptop, which all use my laptop as their approx cache.

Why switch?

So why change to apt-cacher-ng? approx is a simple tool: it runs mainly under inetd and sits between apt and the repository on the Internet. apt-cacher-ng, on the other hand, provides a lot of features. Below are some, taken from the apt-cacher-ng manual.

  • Use of TLS/SSL repositories (may be possible with approx, but I'm not sure how to do it)
  • Access control over who can access the caching server
  • Integration with debdelta (I've not tried it; approx also supports debdelta)
  • Avoiding use of apt-cacher-ng for some hosts
  • Avoiding caching of some file types
  • Partial mirroring for offline usage
  • Selection of IPv4 or IPv6 for connections

The biggest change I see is the speed difference between approx and apt-cacher-ng. I think this is mainly because apt-cacher-ng is threaded, whereas approx runs under inetd.

I do not want all the features of apt-cacher-ng at the moment, but who knows, in the future I might need some of them, and hence I decided to switch from approx to apt-cacher-ng.


The transition from approx to apt-cacher-ng was smoother than I expected. There are 2 approaches you can use: explicit routing and transparent routing. I prefer transparent routing, for which I only had to change my /etc/apt/sources.list to use the actual repository URL.

deb http://deb.debian.org/debian unstable main contrib non-free
deb-src http://deb.debian.org/debian unstable main

deb http://deb.debian.org/debian experimental main contrib non-free
deb-src http://deb.debian.org/debian experimental main

After the above change I had to add a 01proxy configuration file to /etc/apt/apt.conf.d/ with the following content.

Acquire::http::Proxy "http://localhost:3142/";

I use explicit routing only when using apt-cacher-ng with pbuilder and debootstrap. The following snippet shows explicit routing in /etc/apt/sources.list.

deb http://localhost:3142/deb.debian.org/debian unstable main

Usage with pbuilder and friends

To use apt-cacher-ng with pbuilder you need to modify /etc/pbuilderrc to contain the following line:
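The configuration line itself appears to be missing from the post at this point. Given the explicit-routing scheme described above, a plausible reconstruction (my assumption, not the author's original text) is:

```shell
# /etc/pbuilderrc -- route pbuilder's downloads through apt-cacher-ng
# (assumed reconstruction; the original line is missing from the post)
MIRRORSITE=http://localhost:3142/deb.debian.org/debian
```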


Usage with debootstrap

To use apt-cacher-ng with debootstrap, pass the MIRROR argument of debootstrap as http://localhost:3142/deb.debian.org/debian.
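For example (a sketch; the suite and target directory here are arbitrary choices of mine):

```shell
# Create an unstable chroot, fetching packages through apt-cacher-ng
sudo debootstrap unstable /srv/chroot/unstable \
    http://localhost:3142/deb.debian.org/debian
```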


I've now completed the full transition of my workflow to apt-cacher-ng and purged approx and its cache.

Though it works fine, I feel that 2 caches are created when you use both transparent and explicit proxying via the localhost:3142 URL. I'm sure it is possible to configure this to avoid the duplication, but I've not yet figured out how. If you know how to fix this, do let me know.


Jonas told me that it's not 2 caches but 2 routing paths, one for transparent routing and another for explicit routing. So I guess there is nothing here to fix. :-)

17 July, 2016 11:00AM by copyninja

hackergotchi for Neil Williams

Neil Williams

Deprecating dpkg-cross

Deprecating the dpkg-cross binary

After a discussion in the cross-toolchain BoF at DebConf16, the gross hack which is packaged as the dpkg-cross binary package, and its supporting perl module, have finally been deprecated, long after multiarch was actually delivered. Various reasons have complicated the final steps for dpkg-cross, and there remains one use for some of the files within the package, although not for the dpkg-cross binary itself.

2.6.14 has now been uploaded to unstable and introduces a new binary package, cross-config, so it will spend some time in NEW. The changes are summarised in the NEWS entry for 2.6.14.

The cross architecture configuration files have moved to the new cross-config package and the older dpkg-cross binary with supporting perl module are now deprecated. Future uploads will only include the cross-config package.

Use cross-config to retain support for autotools and CMake cross-building configuration.

If you use the deprecated dpkg-cross binary, now is the time to migrate away from these path changes. The dpkg-cross binary and the supporting perl module should NOT be expected to be part of Debian by the time of the Stretch release.

2.6.14 also marks the end of my involvement with dpkg-cross. The Uploaders list has been shortened but I'm still listed to be able to get 2.6.14 into NEW. A future release will drop the perl module and the dpkg-cross binary, retaining just the new cross-config package.

17 July, 2016 10:30AM by Neil Williams

Valerie Young

Work after DebConf

First week after DebCamp and DebConf! Both were incredible — the Debian project and its contributors never fail to impress and delight me. Nonetheless, it felt great to have a few quiet, peaceful days of uninterrupted programming.

Notes about last week:

1. Finished Mattia’s final suggestions for the conversion of the package set pages script to python.

Hopefully it will be deployed soon, awaiting final approval 🙂

2. Replaced the bash code that produced the left navigation on the home page (and most other pages) with the mustache template the python scripts use.

Previously, HTML was constructed and spat out by both a python script and a shell script — now we have a single, DRY mustache template. (At the top of the bash function that produced the navigation HTML, you will find the comment: “this is really quite incomprehensible and should be killed, the solution is to write all html pages with python…”. Turns out the intermediate solution is to use templates 😉 )

3. Thought hard about navigation of the test website, and redesigned (by rearranging) links in the left hand navigation.

After code review, you will see these changes as well! Things to look forward to include:
– A link to the Debian dashboard on the top left of every page (except the package specific pages).
– The title of each page (except the package pages) stretches across the whole page (instead of being squashed into the top left).
– Hover text has been added to most links in the left navigation.
– Links in left navigation have been reordered, and headers added.

Once you see the changes, please let me know if you find anything unintuitive or confusing; everything can be easily changed!

4. Cross suite and architecture navigation enabled for most pages.

For most pages, you will be one click away from seeing the same statistics for a different suite or architecture! Whoo!

Notes about next week:

Last week I got carried away imagining minor improvements that could be made to the test website's UI, and I now have a backlog of ideas I'd like to implement. I've begun editing the script that makes most of the pages with statistics or package lists (for example, all packages with notes, or all recently tested packages) so that it uses templates and contains a bit more descriptive text. I'd also like to do some revamping of the package set pages I converted.

These additional UI changes will be my first tasks for the coming week, since they are fresh in my mind and I'm quite excited about them. The following week I'd like to get back to the extensibility and database issues mentioned previously!

17 July, 2016 01:42AM by spectranaut

July 16, 2016

hackergotchi for Paul Tagliamonte

Paul Tagliamonte

The Open Source License API

Around a year ago, I started hacking together a machine-readable version of the OSI approved licenses list, casually picking parts up until it was ready to launch. A few weeks ago, we officially announced the OSI license API, which is now live at api.opensource.org.

I also took a whack at writing a few API bindings, in Python, Ruby, and using the models from the API implementation itself in Go. In the following few weeks, Clint wrote one in Haskell, Eriol wrote one in Rust, and Oliver wrote one in R.

The data is sourced from a repo on GitHub, the licenses repo under OpenSourceOrg. Pull Requests against that repo are wildly encouraged! Additional data ideas, cleanup or more hand collected data would be wonderful!

In the meantime, use cases for this API range from language package managers checking the OSI approval of a license programmatically, to taking a license identifier as defined in one dataset (SPDX, for example) and finding the identifier as it exists in another system (DEP5, Wikipedia, TL;DR Legal).

Patches are hugely welcome, as are bug reports and ideas! I'd also love more API wrappers for other languages!

16 July, 2016 07:30PM by Paul Tagliamonte

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, June 2016

A Debian LTS logoLike each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In June, 158.25 work hours have been dispatched among 11 paid contributors. Their reports are available:

DebConf 16 Presentation

If you want to know more about how the LTS project is organized, you can watch the presentation I gave during DebConf 16 in Cape Town.

Evolution of the situation

The number of sponsored hours increased a little bit to 135 hours per month, thanks to 3 new sponsors (Laboratoire LEGI – UMR 5519 / CNRS, Quarantainenet BV, GNI MEDIA). Our funding goal is getting closer but it’s not there yet.

The security tracker currently lists 40 packages with a known CVE and the dla-needed.txt file lists 38 packages awaiting an update.

Thanks to our sponsors

New sponsors are in bold.

16 July, 2016 06:31AM by Raphaël Hertzog

July 15, 2016

hackergotchi for Lars Wirzenius

Lars Wirzenius

Two-factor auth for local logins in Debian using U2F keys

Warning: This blog post includes instructions for a procedure that can lead you to lock yourself out of your computer. Even if everything goes well, you'll be hunted by dragons. Keep backups, have a rescue system on a USB stick, and wear flameproof clothing. Also, have fun, and tell your loved ones you love them.

I've recently gotten two U2F keys. U2F is an open standard for authentication using hardware tokens. It's probably mostly meant for website logins, but I wanted to have it for local logins on my laptop running Debian. (I also offer a line of stylish aluminium foil hats.)

Having two-factor authentication (2FA) for local logins improves security if you need to log in (or unlock a screen lock) in a public or potentially hostile place, such as a cafe, a train, or a meeting room at a client. If they have video cameras, they can film you typing your password, and get the password that way.

If you set up 2FA using a hardware token, your enemies will also need to lure you into a cave, where a dragon will use a precision flame to incinerate you in a way that leaves the U2F key intact, after which your enemies steal the key, log into your laptop and leak your cat GIF collection.

Looking up information for how to set this up, I found a blog post by Sean Brewer, for Ubuntu 14.04. That got me started. Here's what I understand:

  • PAM is the technology in Debian for handling authentication for logins and similar things. It has a plugin architecture.

  • Yubico (maker of Yubikeys) have written a PAM plugin for U2F. It is packaged in Debian as libpam-u2f. The package includes documentation in /usr/share/doc/libpam-u2f/README.gz.

  • By configuring PAM to use libpam-u2f, you can require both password and the hardware token for logging into your machine.

Here are the detailed steps for Debian stretch, with minute differences from those for Ubuntu 14.04. If you follow these, and lock yourself out of your system, it wasn't my fault, you can't blame me, and look, squirrels! Also not my fault if you don't wear sufficient protection against dragons.

  1. Install pamu2fcfg and libpam-u2f.
  2. As your normal user, mkdir ~/.config/Yubico. The list of allowed U2F keys will be put there.
  3. Insert your U2F key and run pamu2fcfg -u$USER > ~/.config/Yubico/u2f_keys, and press the button on your U2F key when the key is blinking.
  4. Edit /etc/pam.d/common-auth and append the line auth required pam_u2f.so cue.
  5. Reboot (or at least log out and back in again).
  6. Log in, type in your password, and when prompted and the U2F key is blinking, press its button to complete the login.
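Condensed into a single session, steps 1–4 above look roughly like this (a sketch; run the install and PAM-editing steps as root):

```shell
# Step 1: install the tools (as root)
apt install pamu2fcfg libpam-u2f

# Steps 2-3: as your normal user, register the first key
mkdir -p ~/.config/Yubico
pamu2fcfg -u"$USER" > ~/.config/Yubico/u2f_keys   # press the key when it blinks

# Step 4: as root, append this line to /etc/pam.d/common-auth:
#   auth required pam_u2f.so cue
```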

pamu2fcfg reads the hardware token and writes out its identifying data in a form that the PAM module understands; see the pam-u2f documentation for details. The data can be stored in the user's home directory (my preference) or in /etc/u2f_mappings.

Once this is set up, anything that uses PAM for local authentication (console login, GUI login, sudo, desktop screen lock) will need to use the U2F key as well. ssh logins won't.

Next, add a second key to your u2f_keys. This is important, because if you lose your first key, or it's damaged, you'll otherwise have no way to log in.

  1. Insert your second U2F key and run pamu2fcfg -n > second, and press the second key's button when prompted.
  2. Edit ~/.config/Yubico/u2f_keys and append the contents of the file second to the line with your username.
  3. Verify that you can log in using your second key as well as the first. Note that you should have only one of the keys plugged in at a time when logging in: the PAM module uses the first key it finds, so you can't test both keys plugged in at once.

This is not too difficult, but rather fiddly, and it'd be nice if someone wrote at least a way to manage the list of U2F keys in a nicer way.

15 July, 2016 11:19AM

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Fully SSL for my website

I finally made full switch to SSL for my website. Thanks to this simple howto on Let's Encrypt. I had to use the upstream git repo though. The Debian packaged tool, letsencrypt.sh, did not have enough documentation/pointers in place. And finally, thanks to the Let's Encrypt project as a whole.

PS: http is now redirected to https. I hope nothing really breaks externally.




15 July, 2016 10:24AM by Ritesh Raj Sarraf

hackergotchi for Clint Adams

Clint Adams

July 14, 2016

Andrew Cater

Who wrote Hello world

Who wrote "Hello, world" ?
Rereading Kernighan and Ritchie's classic book on C - https://en.wikipedia.org/wiki/The_C_Programming_Language - almost the first thing you find is the listing for hello world. The comments make it clear that this is a commonplace - the sort of program that every programmer writes as a first test - the new computer works, the compiler / interpreter produces useful output and so on. It's the classic, canonical thing to do.

A long time back, I was asked whether programming was an art or a science: it's both, but most of all it's only good insofar as it's shared and built on. I used hello world as an example: you can write hello world. You decide to add different text - a greeting (Hej! / ni hao / Bonjour tout le monde!) for friends.

You discover at / cron / anacron - now you can schedule reminders "It's midnight - do you know where your code is?" "Go to bed, you have school tomorrow"

You can discover how to code for a graphical environment: how to build a test framework around it to check that it _only_ prints hello world and doesn't corrupt other memory ... the uses are endless if it sparks creativity.

If you feel like it, you can share your version - and learn from others. Write it in different languages - there's the analogous 99 bottles of beer site showing how to count and use different languages at www.99-bottles-of-beer.net

Not everyone will get it: not everyone will see it but everyone needs the opportunity 

Everyone needs the chance to share and make use of the commons, needs to be able to feel part of this 

Needs to be included: needs to feel that this is part of common heritage.
If you work for an employer: get them to contribute code / money / resources - even if it's as a charitable donation or to offset against taxes

If you work for a government: get them to use Free/Libre/Open Source products

If you work for a hosting company / ISP - get them to donate bandwidth for schools/coding clubs.

Give your time, effort, expertise to help: you gained from others, help others gain

If you work for an IT manufacturer - get them to think of FLOSS as the norm, not the exception



14 July, 2016 10:43PM by Andrew Cater (noreply@blogger.com)

hackergotchi for Steve Kemp

Steve Kemp

Adding lua to all the things!

Recently Antirez made a post documenting a simple editor in 1k of pure C. The post was interesting in itself, and the editor is a cute toy because it doesn't use curses - instead it uses escape sequences.

The GitHub project became very popular, and much interesting discussion took place on Hacker News.

My interest was piqued because I've obviously spent a few months working on my own console based program, and so I had to read the code, see what I could learn, and generally have some fun.

As expected Salvatore's code is refreshingly simple, neat in some areas, terse in others, but always a pleasure to read.

Also, as expected, a number of forks appeared adding various features. I figured I could do the same, so I did the obvious thing and added Lua scripting support to the project. In my fork the core of the editor is mostly left alone; instead, code was moved out of it into an external Lua script.

The highlight of my lua code is this magic:

  -- Keymap of bound keys
  local keymap = {}

  --  Default bindings
  keymap['^A']        = sol
  keymap['^D']        = function() insert( os.date() ) end
  keymap['^E']        = eol
  keymap['^H']        = delete
  keymap['^L']        = eval
  keymap['^M']        = function() insert("\n") end

I wrote a function that is invoked on every key-press and used it to look up key-bindings. By adding a bunch of primitives to export and manipulate the core of the editor from Lua, I simplified the editor's core logic and enabled interesting facilities:

  • Interactive evaluation of lua.
  • The ability to remap keys on the fly.
  • The ability to insert command output into the buffer.
  • The implementation of copy/paste entirely in Lua.

All in all I had fun, and I continue to think a Lua-scripted editor would be a neat project - I'm just not sure there's a "market" for another editor.

View my fork here, and see the sample kilo.lua config file.

14 July, 2016 08:57PM

hackergotchi for Sune Vuorela

Sune Vuorela

Leaky lambdas and self referencing shared pointers

After a bit of a debugging session, I ended up looking at some code in a large project

m_foo = std::make_shared<SomeQObject>();
/* plenty of lines and function boundaries left out */ 
(void)connect(m_foo.get(), &SomeQObject::someSignal, [m_foo]() {
  /* */
});

The connection gets removed when the pointer inside m_foo gets de-allocated by the shared_ptr.
But the connection target is a lambda that has captured a copy of the shared_ptr…

There are at least a couple of solutions:

  • Keep the connection object (QMetaObject::Connection) around and call disconnect in your destructor. That way the connection gets removed and the lambda object should get destroyed.
  • Capture the shared pointer by (const) reference, as a weak pointer, or as a raw pointer. All of these are safe, because whenever the shared pointer's refcount reaches zero, the connection gets taken down with the object.
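The reference cycle and the weak-pointer fix can be sketched without Qt, using a plain std::function as a stand-in for the connected slot (SomeObject and demo are illustrative names, not from the project's code):

```cpp
#include <functional>
#include <memory>
#include <utility>

// Stand-in for a QObject that owns a connection to a lambda.
struct SomeObject {
    std::function<void()> slot;
};

// Returns {use_count with by-value capture, use_count after the weak fix}.
std::pair<long, long> demo() {
    auto obj = std::make_shared<SomeObject>();

    // BUG: capturing the shared_ptr by value creates a cycle --
    // obj owns the slot, and the slot owns obj.
    obj->slot = [obj]() { /* ... */ };
    long leaky = obj.use_count();  // 2: the lambda's copy keeps obj alive

    // FIX: capture a weak_ptr and lock it inside the callback.
    std::weak_ptr<SomeObject> weak = obj;
    obj->slot = [weak]() {
        if (auto p = weak.lock()) { /* safe to use p here */ }
    };
    long fixed = obj.use_count();  // 1: the old lambda and its copy are gone
    return {leaky, fixed};
}
```

Reassigning the std::function destroys the old lambda, which releases its shared_ptr copy and breaks the cycle; with the weak capture no cycle forms in the first place.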

I guess the lesson learnt is be careful when capturing shared pointers.

14 July, 2016 08:27PM by Sune Vuorela

hackergotchi for Norbert Preining

Norbert Preining

Osamu Dazai – No Longer Human

Japanese authors have a tendency to commit suicide, it seems. I have read Ryunosuke Akutagawa (芥川 龍之介, at 35), Yukio Mishima (三島 由紀夫, at 45), and also Osamu Dazai (太宰 治, at 39). Their end often reflects in their writings, and one of these examples is the book I just finished, No Longer Human.


Considered Dazai’s masterpiece, it is, together with Soseki’s Kokoro, one of the best-selling novels in Japan. The book recounts the life of Oba Yozo, from childhood to his end in a mental hospital. The early years, described in the first chapter (“Memorandum”), are filled with a feeling of differentness, of alienation from the rest, and Oba adopts a way of living by playing the clown, permanently making jokes. The Second Memorandum spans the time up to university, where he drops out, tries to become a painter, and indulges in alcohol, smoking and prostitutes, leading to a suicide attempt together with a married woman, which he survives. The first part of the Third Memorandum sees a short recovery due to his relationship with a woman. He stops drinking and works as a cartoonist, but in the last part his drinking pal from university days shows up again and they descend into ever-increasing vicious drinking. Eventually he is separated from his wife and confined to a mental hospital.

Very depressing to read, but written in a way that one cannot stop reading. The disturbing thing about this book is that, although the main character commits many bad actions, we somehow feel attached to him and pity him. It is an exercise in how circumstances and small predispositions can bring huge changes in our lives. And it warns us that each one of us can easily come to this brink.

14 July, 2016 04:42AM by Norbert Preining

July 13, 2016

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Cubemap 1.3.0 released

I just released version 1.3.0 of Cubemap, my high-performance video reflector. For a change, both new features are from (indirect) user requests; someone wanted support for raw TS inputs and it was easy enough to add.

And then I heard a rumor that people had found Cubemap useless because “it was logging so much”. Namely, if you have a stream that's down, Cubemap will try to connect to it every 200 ms, and log two lines for every failed connection attempt. Now, why people discard software over ~50 MB/day of logs (more like 50 kB/day after compression) on a broken setup (if you have a stream that's not working, why not just remove it from the config file and reload?) instead of just asking the author is beyond me, but hey, eventually it reached my ears, and after a grand half hour of programming, there's now rate-limiting of logging for failed connection attempts. :-)

The new version hasn't hit Debian unstable yet, but I'm sure it will very soon.

13 July, 2016 11:54PM

Niels Thykier

Selecting key packages via UDD

Thanks to Lucas Nussbaum, we now have a UDD script to filter/select key packages. Some example use cases:

Which key packages used compat 4?

# Data file compat-4-packages (one *source* package per line)
$ curl --silent --data-binary @compat-4-packages \

Also useful for things like bug #830997, which was my excuse for requesting this. :)

Is package foo a key package (yet)?

$ is-key-pkg() {
 RES=$(echo "$1" | curl --silent --data-binary @- \
 if [ "$RES" ]; then
   echo yes
 else
   echo no
 fi
}
$ is-key-pkg bash
$ is-key-pkg mscgen
$ is-key-pkg NotAPackage


Above shell snippets might need tweaking for better error handling, etc.

Once again, thanks to Lucas for the server-side UDD script. :)

Filed under: Debian

13 July, 2016 08:36PM by Niels Thykier

Dominique Dumont

A survey for developers about application configuration


Markus Raab, the author of the Elektra project, has created a survey to get FLOSS developers’ points of view on application configuration.

If you are a developer, please fill in this survey to help Markus’ work on improving application configuration management. Filling in the survey should take about 15 minutes.

Note that the survey will close on July 18th.

The fact that this blog post comes 1 month after the beginning of the survey is entirely my fault. Sorry about that…

All the best

Tagged: configuration, Perl

13 July, 2016 05:47PM by dod

Reproducible builds folks

Reprotest containers are (probably) working, finally

Author: ceridwen

After testing and looking at the code for Plumbum, I decided that it wouldn't work for my purposes. When a command is created by something like remote['ls'], it actually looks up which ls executable will be run and uses in the command object an internal representation of a path like /bin/ls. To make it work with autopkgtest's code would have required writing some kind of middle layer that would take all of the Plumbum code that makes subprocess calls, does path lookups, or uses the Python standard library to access OS functions and convert them into shell scripts. Another similar library, sh, has the same problems. I think there's a strong argument that something like Plumbum's or sh's API would be much easier to work with than adt_testbed, and I may take steps to improve it at some point, but for the moment I've focused on getting reprotest working with the existing autopkgtest/adt_testbed API.

To do this, I created a minimalistic shell AST library in _shell_ast.py using the POSIX shell grammar. I omitted a lot of functionality that wasn't immediately relevant for reprotest and simplified the AST in some places to make it easier to work with. Using this, it generates a shell script and runs it with adt_testbed.Testbed.execute(). With these pieces in place, the tests succeed for both null and schroot! I haven't tested the rest of the containers, but barring further issues with autopkgtest, I expect they should work.

At this point, my goal is to push the next release out by next week, though as usual it will depend on how many snags I hit in the process. I see the following as the remaining blockers:

  • Test chroot and qemu in addition to null and schroot.

  • PyPI still doesn't install the scripts in virt/ properly.

  • While I fixed part of adt_testbed's error handling, some build failures still cause it to hang, and I have to kill the process by hand.

  • Better user documentation. I don't have time to be thorough, but I can provide a few more pointers.

13 July, 2016 03:49PM

July 12, 2016

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Cisco WLC SNMP password reset

If you have a Cisco wireless controller whose admin password you don't know, and you don't have the right serial cable, you can still reset it over SNMP if you forgot to disable the default read/write community:

snmpset -Os -v 2c -c private s foobarbaz

Thought you'd like to know. :-P

(There are other SNMP-based variants out there that rely on the CISCO-CONFIG-COPY-MIB, but older versions of the WLC software don't support it.)

12 July, 2016 10:50PM

Olivier Grégoire

Seventh week: create a new architecture and send a lot of information

At the beginning of this week, I thought my architecture needed to be closer to the call. The goal was to create one class per call to store the different pieces of information. With this method, I only need to call the instance I am interested in, and I can easily pull the information.
So, I began to rewrite the architecture in the daemon to create an instance of my class linked directly to the callID.
After implementing it, I was really disappointed. This class was hard to call from the upper software layers. Indeed, I didn't know which call was currently displayed on my client.
I changed my mind and rewrote the architecture again. I observed that the information I want to pull (frame rate, bandwidth, resolution…) is all generated in my daemon, and it is updated every time something changes in the client. So, I just need to catch it and send it to the upper software layers. My new architecture is simply a singleton, because I need just one instance and I need to pull it from everywhere in my program.
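A minimal sketch of such a singleton, with entirely hypothetical names (CallInfoRegistry, update, and get are mine, not the project's actual API): the daemon writes media statistics keyed by call ID, and any upper layer reads them through the one instance:

```cpp
#include <map>
#include <mutex>
#include <string>

// Hypothetical sketch of a singleton holding per-call media info that the
// daemon updates and upper software layers read from anywhere.
class CallInfoRegistry {
public:
    static CallInfoRegistry& instance() {
        static CallInfoRegistry reg;  // Meyers singleton: one shared instance
        return reg;
    }

    // Called by the daemon whenever a value (frame rate, codec, ...) changes.
    void update(const std::string& callId, const std::string& key,
                const std::string& value) {
        std::lock_guard<std::mutex> lk(mutex_);
        info_[callId][key] = value;
    }

    // Called by upper layers; returns "" if the value is unknown.
    std::string get(const std::string& callId, const std::string& key) {
        std::lock_guard<std::mutex> lk(mutex_);
        return info_[callId][key];
    }

private:
    CallInfoRegistry() = default;
    std::mutex mutex_;
    std::map<std::string, std::map<std::string, std::string>> info_;
};
```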

Beyond that, I wanted to pull some information about the video (frame rate, resolution, codec for the local and remote computer). So, I looked into how frame generation works. Now I can pull:
  • Local and remote video codec
  • Local and remote frame rate
  • Remote audio codec
  • Remote resolution


Next week, I will begin working on creating an API in the client library. After that, I will continue to retrieve other information.

12 July, 2016 04:57PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RProtoBuf 0.4.4, and new JSS paper

A new release 0.4.4 of RProtoBuf is now on CRAN, and corresponds to the source archive for the Journal of Statistical Software paper about RProtoBuf as JSS vol71 issue 02. The paper is also included as a pre-print in the updated package.

RProtoBuf provides R bindings for the Google Protocol Buffers ("Protobuf") data encoding library used and released by Google, and deployed as a language and operating-system agnostic protocol by numerous projects.

This release brings small cleanups as well as build-system updates for the updated R 3.3.0 infrastructure based on g++ 4.9.*.

Changes in RProtoBuf version 0.4.4 (2016-07-10)

  • New vignette based on our brand-new JSS publication (v71 i02)

  • Some documentation enhancements were made, as well as other minor cleanups to file modes and operations

  • Unit-test vignette no longer writes to /tmp per CRAN request

  • The new Windows toolchain (based on g++ 4.9.*) is supported

CRANberries also provides a diff to the previous release. The RProtoBuf page has an older package vignette, a 'quick' overview vignette, a unit test summary vignette, and the pre-print for the JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

12 July, 2016 01:43AM

Mateus Bellomo

Get presence from buddies


Now the functionality that allows a user to see his contacts' presence is also implemented. In the first week I had implemented only the Telepathy part of these methods, and back then I didn't comprehend that this information would come in the form of a SIP NOTIFY message. I also needed to use the SUBSCRIBE mechanism properly so that the presence server would send me the NOTIFY message.

To be able to create those messages, a better understanding of the resip/stack, resip/recon and resip/dum APIs was necessary. Not that I have mastered these libraries now, but at least I'm not totally lost anymore =)

Looking into these libraries, I could see how much work has been done by all the resiprocate contributors (and I imagine I haven't even seen the tip of the iceberg). There are so many features ready for use that now I think twice before starting to implement something.

Since I didn't find the contact statuses explicitly listed in RFC 3863 [1], I got this information by changing a contact's presence (on a different machine, logged into Jitsi [2]) and looking into the NOTIFY message received at resiprocate.

Here are some images of contact presence in Empathy:


[Screenshots of the following presence states in Empathy:]
  • Online status
  • Offline status
  • Busy (DND) status
  • Away status
  • In a meeting status
  • Unknown status

[1] https://tools.ietf.org/html/rfc3863

[2] https://jitsi.org/

12 July, 2016 01:19AM by mateusbellomo

hackergotchi for Joey Hess

Joey Hess

twenty years of free software -- part 13 past and future

This series has focused on new projects. I could have said more about significant work that didn't involve starting new projects. A big example was when I added dh to debhelper, which led to changes in a large percentage of debian/rules files. And I've contributed to many other free software projects.

I guess I've focused on new projects because it's easier to remember things I've started myself. And because a new project is more wide open, with more scope for interesting ideas, especially when it's free software being created just because I want to, with no expectations of success.

But starting lots of your own projects also makes you responsible for maintaining a lot of stuff. Ten years ago I had dozens of projects that I'd started and was still maintaining. Over time I started pulling away from Debian, with active projects increasingly not connected with it. By the end, I'd left and stopped working on the old projects. Nearly everything from my first decade in free software was passed on to new maintainers. It's good to get out from under old projects and concentrate on new things.

I saved propellor for last in this series, because I think it may point toward the future of my software. While git-annex was the project that taught me Haskell, propellor's design is much more deeply influenced by the Haskell viewpoint.

Absorbing that viewpoint has itself been a big undertaking for me this decade. It's like I was coasting along, feeling at the top of my game, and wham, got hit by the type theory bus. And now I see that I was stuck in a rut before, and begin to get a feeling of many new possibilities.

That's a good feeling to have, twenty years in.

12 July, 2016 12:07AM