April 25, 2017


Keith Packard

TeleMini3

TeleMini V3.0 Dual-deploy altimeter with telemetry now available

TeleMini v3.0 is an update to our original TeleMini v1.0 flight computer. It is a miniature (1/2 inch by 1.7 inch) dual-deploy flight computer with data logging and radio telemetry. Small enough to fit comfortably in an 18mm tube, this powerful package does everything you need on a single board:

  • 512kB on-board data logging memory, compared with 5kB in v1.

  • 40mW, 70cm ham-band digital transceiver for in-flight telemetry and on-the-ground configuration, compared to 10mW in v1.

  • Transmitted telemetry includes altitude, speed, acceleration, flight state, igniter continuity, temperature and battery voltage. Monitor the state of the rocket before, during and after flight.

  • Radio direction finding beacon transmitted during and after flight. This beacon can be received with a regular 70cm Amateur radio receiver.

  • Barometer accurate to 100k' MSL. Reliable apogee detection, independent of flight path. Barometric data recorded on-board during flight. The v1 boards could only fly to 45k'.

  • Dual-deploy with adjustable apogee delay and main altitude. Fires standard e-matches and Q2G2 igniters.

  • 0.5” x 1.7”. Fits easily in an 18mm tube. This is slightly longer than the v1 boards to provide room for two extra mounting holes past the pyro screw terminals.

  • Uses rechargeable Lithium Polymer battery technology. All-day power in a small and light-weight package.

  • Learn more at http://www.altusmetrum.org/TeleMini/

  • Purchase these at http://shop.gag.com/home-page/telemini-v3.html

I don't have anything in these images to show just how tiny this board is—but the spacing between the screw terminals is 2.54mm (0.1in), and the whole board is only 13mm wide (1/2in).

This was a fun board to design. As you might guess from the version number, we made a couple of prototypes of a version 2 using the same CC1111 SoC/radio part as version 1, but in the EasyMini form factor (0.8 by 1.5 inches). Feedback from existing users indicated that bigger wasn't better in this case, so we shelved that design.

With the availability of the STM32F042 ARM Cortex-M0 part in a 4mm square package, I was able to pack that, the higher power CC1200 radio part, a 512kB memory part and a beeper into the same space as the original TeleMini version 1 board. There is USB on the board, but it's only on some tiny holes, along with the cortex SWD debugging connection. I may make some kind of jig to gain access to that for configuration, data download and reprogramming.

For those interested in an even smaller option, you could remove the screw terminals and battery connector and directly wire to the board, and replace the beeper with a shorter version. You could even cut the rear mounting holes off to make the board shorter; there are no components in that part of the board.

25 April, 2017 04:01PM


Daniel Pocock

FSFE Fellowship Representative, OSCAL'17 and other upcoming events

The Free Software Foundation Europe has just completed the process of electing a new fellowship representative to the General Assembly (GA), and I was surprised to find that out of seven very deserving candidates, members of the fellowship have selected me to represent them on the GA.

I'd like to thank all those who voted, the other candidates and Erik Albers for his efforts to administer this annual process.

Please consider becoming an FSFE fellow or donor

The FSFE runs on the support of both volunteers and financial donors, including organizations and individual members of the fellowship program. The fellowship program is not about money alone; it is an opportunity to become more aware of, and involved in, the debate about technology's impact on society, for better or worse. Developers, users and any other supporters of the organization's mission are welcome to join; here is the form. You don't need to be a fellow or pay any money to be an active part of the free software community, and FSFE events generally don't exclude non-members. Nonetheless, becoming a fellow gives you a stronger voice in processes such as this annual election.

Attending OSCAL'17, Tirana

During the election period, I promised to keep on doing the things I already do: volunteering, public speaking, mentoring, blogging and developing innovative new code. During May I hope to attend several events, including OSCAL'17 in Tirana, Albania on 13-14 May. I'll be running a workshop there on the Debian Hams blend and Software Defined Radio. Please come along and encourage other people you know in the region to consider attending.

What is your view on the Fellowship and FSFE structure?

Several candidates made comments about the Fellowship program and the way individual members and volunteers are involved in FSFE governance. This is not a new topic. Debate about this topic is very welcome and I would be particularly interested to hear any concerns or ideas for improvement that people may contribute. One of the best places to share these ideas would be through the FSFE's discussion list.

In any case, the fellowship representative cannot single-handedly overhaul the organization. I hope to be a constructive part of the team, and that whenever my term comes to an end, the organization and the free software community in general will be stronger and happier in some way.

25 April, 2017 12:57PM by Daniel.Pocock

Tim Retout

Packet.net arm64 servers

Packet.net offer an ARMv8 server with 96 cores for $0.50/hour. I signed up and tried building LibreOffice to see what would happen. Debian isn't officially supported there yet, but they offer Ubuntu, which suffices for testing the hardware.

Screenshot of htop showing one core in use and 95 idle.

Final build time: around 12 hours, compared to 2hr 55m on the official arm64 buildd.

Most of the LibreOffice build appeared to consist of "touch /some/file" repeated endlessly - I have a suspicion that the I/O performance might be low on this server (although I have no further evidence to offer for this). I think the next thing to try is building on a tmpfs, because the server has 128GB RAM available, and it's a shame not to use it.
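A minimal sketch of that tmpfs experiment; the mount point, size and source path below are made up, not from this post:

# create a RAM-backed filesystem and build inside it, keeping I/O off the disk
sudo mkdir -p /mnt/build
sudo mount -t tmpfs -o size=64G tmpfs /mnt/build
cp -a ~/libreoffice /mnt/build/
cd /mnt/build/libreoffice && make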

25 April, 2017 12:38PM


Wouter Verhelst

Removing git-lfs

Git is cool, for reasons I won't go into here.

It doesn't deal very well with very large files, but that's fine; tools like git-annex or git-lfs make it possible to manage very large files with git.

But what if you've added a file to git-lfs which didn't need to be there? Let's say you installed git-lfs and told it to track all *.zip files, but then it turned out that some of those files were really small, and that the extra overhead of tracking them in git-lfs is causing a lot of grief with your users. What do you do now?

With git-annex, the solution is simple: you just run git annex unannex <filename>, and you're done. You may also need to tell the assistant to no longer automatically add that file to the annex, but beyond that, all is well.

With git-lfs, this works slightly differently. It's not much more complicated, but it's not documented in the man page. The naive way would be to just run git lfs untrack; but when I tried that, it didn't work. Instead, I found that the following does work (a consolidated example follows the list):

  • First, edit your .gitattributes files, so that the file which was (erroneously) added to git-lfs is no longer matched against a git-lfs rule in your .gitattributes.
  • Next, run git rm --cached <file>. This will tell git to mark the file as removed from git, but crucially, will retain the file in the working directory. Dropping the --cached will cause you to lose those files, so don't do that.
  • Finally, run git add <file>. This will cause git to add the file to the index again; but as it's now no longer being smudged up by git-lfs, it will be committed as the normal file that it's supposed to be.
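
Putting those three steps together, a minimal sketch; the file name small.zip is an assumption, and the first step is still the manual .gitattributes edit:

# 1. edit .gitattributes so no git-lfs rule matches the file anymore
# 2. remove the file from the index, keeping the working copy
git rm --cached small.zip
# 3. re-add it; without the lfs filter it is committed as a plain blob
git add small.zip
git commit -m "store small.zip as a regular git file"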

25 April, 2017 09:56AM

Reproducible builds folks

Reproducible Builds: week 104 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday April 16 and Saturday April 22 2017:

Upcoming events

  • On April 26th Chris Lamb will give a talk at foss-north 2017 in Gothenburg, Sweden on Reproducible Builds.

  • Between May 5th-7th the Reproducible Builds Hackathon 2017 will take place in Hamburg, Germany.

  • On May 13th Chris Lamb will give a talk at OSCAL'17 in Tirana, Albania on Reproducible Builds.

Reproducible work in other projects

Packages reviewed and fixed, and bugs filed

Chris Lamb:

Chris West:

Reviews of unreproducible packages

37 package reviews have been added, 64 have been updated and 16 have been removed this week, adding to our knowledge about identified issues.

One issue type has been updated:

Two issue types have been added:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Chris Lamb (2)

diffoscope development

Misc.

This week's edition was written by Chris Lamb, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

25 April, 2017 07:38AM

April 24, 2017

Mike Gabriel

Making Debian experimental's X2Go Server Packages available on Ubuntu, Mint and alike

Often I get asked: How can I test the latest nx-libs packages [1] with a stable version of X2Go Server [2] on non-Debian, but Debian-like systems (e.g. Ubuntu, Mint, etc.)?

This is quite easy, if you are not scared of building binary Debian packages from Debian source packages. Until X2Go Server (and NXv3) are made available in Debian unstable, brave testers should follow the installation recipe below.

Step 1: Add Debian experimental as Source Package Source

Add Debian experimental as source package provider (and immediately install the Debian Archive Keyring package):

$ echo "deb-src http://httpredir.debian.org/debian experimental main" | sudo tee /etc/apt/sources.list.d/debian-experimental.list
$ sudo apt-get update
$ sudo apt-get install debian-archive-keyring
$ sudo apt-get update

Step 2: Obtain Build Tools and Build Dependencies

When building software, you need to have some extra packages installed. Those packages will not be needed at runtime of the built piece of software, so you may want to take note of which extra packages get installed in the step below. If you plan to rebuild X2Go Server and NXv3 several times, simply leave the build dependencies installed:

$ sudo apt-get build-dep nx-libs
$ sudo apt-get build-dep x2goserver

Step 3: Build NXv3 and X2Go Server from Source

Building NXv3 (aka nx-libs) takes a while, so it may be time to get some coffee now... The build process should not be run as the superuser root. Stay with your normal user account.

$ mkdir Development/ && cd Development/
$ apt-get source -b nx-libs

[... enjoy your coffee, there'll be much output on your screen... ]

$ apt-get source -b x2goserver

In your working directory, you should now find various new files ending in .deb.

Step 4: Install the built packages

Now install those .deb files. It does not hurt to simply install all of them:

sudo dpkg -i *.deb

The above command might result in some error messages. Ignore them; you can easily fix them by installing the missing runtime dependencies:

sudo apt-get install -f

Play it again, Sam

If you want to redo the above with a new nx-libs or x2goserver source package version, simply create an empty folder and repeat the steps above (condensed below). The dpkg command will install the .deb files over the currently installed package versions and update your system with your latest build.
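
Condensed into a single pass, a sketch assuming the build dependencies from Step 2 are still installed (the folder name is arbitrary):

$ mkdir rebuild/ && cd rebuild/
$ apt-get source -b nx-libs
$ apt-get source -b x2goserver
$ sudo dpkg -i *.deb
$ sudo apt-get install -f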

The disadvantage of this build-from-source approach (a temporary recommendation until X2Go Server & co. have landed in Debian unstable) is that you have to check for updates manually from time to time.

All X2Go components are packaged by the still quite fresh Debian Remote Packaging Maintainers team; you may want to visit our team's DDPO page: https://qa.debian.org/developer.php?login=pkg-remote-team%40lists.alioth...

Recommended versions

For X2Go Server, the 4.0.1.x release series is considered stable. The version shipped with Debian has been patched to work with the upcoming nx-libs 3.6.x series, but also tolerates the older 3.5.0.x series as shipped with X2Go's upstream packages.

For NXv3 (aka nx-libs) we recommend using (thus, waiting for) the 3.5.99.6 release. The package has been uploaded to Debian experimental already, but waits in Debian NEW for some minor ftp-master ACK (we added one binary package with the recent upload).

Updates

  • 2017-04-25: Information on how to install the debian-archive-keyring package before downloading any other packages from the Debian archive servers has been added.
  • 2017-04-25: Mention the Debian Remote Maintainers team and its DDPO page. Plus some grammar fixes.

References

24 April, 2017 02:48PM by sunweaver

[Arctica Project] Release of nx-libs (version 3.5.99.6)

Introduction

NX is a software suite which implements very efficient compression of the X11 protocol. This increases performance when using X applications over a network, especially a slow one.

NX (v3) was originally developed by NoMachine and has been Free Software ever since. Since NoMachine obsoleted NX (v3) some time back in 2013/2014, maintenance has been continued by a versatile group of developers. The work on NX (v3) continues under the project name "nx-libs".

Release Announcement

On Friday, Apr 21st 2017, version 3.5.99.6 of nx-libs has been released [1].

As some of you might have noticed, the release announcements for 3.5.99.4 and 3.5.99.5 have never been posted / written, so this announcement lists changes introduced since 3.5.99.3.

Credits

There are always many people to thank, so I won't mention all of them here. The person I do need to mention here is Mihai Moldovan, though. He virtually is our QA manager, although not officially titled as such. The feedback he gives on code reviews is sooo awesome!!! May you be available to our project for a long time. Thanks a lot, Mihai!!!

Changes between 3.5.99.4 and 3.5.99.3

  • Use RPATH in nxagent now for finding libNX_X11 (our fake libX11 with nxcomp support added).
  • Drop support for various archaic platforms.
  • Fix valgrind issue in new Xinerama code.
  • Regression: Fix crash due to incompletely backported code in 3.5.99.3.
  • RPM packaging review by nx-libs's official Fedora maintainer (thanks, Orion!).
  • Update roll-tarball.sh script.

Changes between 3.5.99.5 and 3.5.99.4

  • Support building against the libXfont2 API (using Xfont2 if available in the build environment, otherwise falling back to the Xfont(1) API).
  • Support built-in fonts, no requirement anymore to have the misc fonts installed.
  • Various Xserver os/ and dix/ backports from X.org.
  • ABI backports: CreatePixmap allocation hints, no index in CloseScreen() destructors propagated anymore, SetNotifyFd ABI, GetClientCmd et al. ABI.
  • Add quilt based patch system for bundled Mesa.
  • Fix upstream ChangeLog creation in roll-tarball.sh.
  • Keystroke.c code fully revisited by Ulrich Sibiller (Thanks!!!).
  • nxcomp now builds again on Cygwin. Thanks to Mike DePaulo for providing the patch!
  • Bump libNX_X11 to a status that resembles latest X.org libX11 HEAD. (Again, thanks Ulrich!!!).
  • Various changes to make valgrind more happy (esp. uninitialized memory issues).
  • Hard-code RGB color values, drop previously shipped rgb.txt config file.

Changes between 3.5.99.6 and 3.5.99.5

  • Regression fix for 3.5.99.5: fonts are now displayed correctly again after session resumption.
  • Prefer source tree's nxcomp to system-wide installed nxcomp headers.
  • Provide nxagent specific auto-detection code for available display numbers (see nxagent man page for details, under -displayfd).
  • Man page updates for nxagent (thanks, Ulrich!).
  • Make sure that xkbcomp is available to nxagent (thanks, Mihai!).
  • Switch on building nxagent with MIT-SCREEN-SAVER extension.
  • Fix FTBFS on SPARC64, arm64 and m68k Linux platforms.
  • Avoid duplicate runs of 'make build' due to a flaw in the main Makefile.

Change Log

Lists of changes (since 3.5.99.3) can be obtained from here (3.5.99.3 -> .4), here (3.5.99.4 -> .5) and here (3.5.99.5 -> .6)

Known Issues

A list of known issues can be obtained from the nx-libs issue tracker [issues].

Binary Builds

You can obtain binary builds of nx-libs for Debian (jessie, stretch, unstable) and Ubuntu (trusty, xenial) via these apt-URLs:

Our package server's archive key is: 0x98DE3101 (fingerprint: 7A49 CD37 EBAE 2501 B9B4 F7EA A868 0F55 98DE 3101). Use this command to make APT trust our package server:

 wget -qO - http://packages.arctica-project.org/archive.key | sudo apt-key add -

The nx-libs software project brings you the binary packages nxproxy (client-side component) and nxagent (nx-X11 server, server-side component). The nxagent Xserver can be used for remote sessions (via the nxcomp compression library) or as a nested Xserver.

Ubuntu developers, please note: we have added nightly builds for the latest Ubuntu releases to our build server. At the moment, you can obtain nx-libs builds for Ubuntu 16.10 (yakkety) and 17.04 (zesty) as nightly builds.

References

24 April, 2017 02:12PM by sunweaver


Norbert Preining

Leaving for BachoTeX 2017

Tomorrow we are leaving for TUG 2017 @ BachoTeX, a merger of one of the most unusual and great conference series (BachoTeX) with the most important series of TeX conferences (TUG). I am looking forward to this trip and to seeing all the good friends there.

And having the chance to visit my family in Vienna at the same time makes this trip worth it, however painful the long flight with our daughter may be.

See you in Vienna and Bachotek!

PS: No pun intended with the photo-logo combination, just a shot of the great surroundings in Bachotek 😉

24 April, 2017 02:20AM by Norbert Preining

April 23, 2017

Mark Brown

Bronica Motor Drive SQ-i

I recently got a Bronica SQ-Ai medium format film camera which came with the Motor Drive SQ-i. Since I couldn't find any documentation at all about it on the internet and had to figure it out for myself, I figured I'd put what I learned here. Hopefully this will help the next person trying to figure one out; or, at least, by virtue of being wrong on the internet, I'll be able to get someone who knows what they're doing to tell me how the thing really works.

Bottom plate

The motor drive attaches to the camera using the tripod socket; a replacement tripod socket is provided on the base plate. There's also a metal plate with the bottom of the hand grip attached to it, held onto the base plate with a thumb screw. When this is released it gives access to the screw holding in the battery compartment, which (very conveniently) takes 6 AA batteries. The drive also provides power to the camera body when attached.

Bottom plate with battery compartment visible

On the back of the base of the camera there's a button with a red LED next to it which illuminates slightly when the button is pressed (it's visible in low light only). I'm not 100% sure what this is for; I'd have guessed a battery check if the light were easier to see.

Top of drive

On the top of the camera there is a hot shoe (with a plastic blanking plate, a nice touch), a mode selector and two buttons. The larger button on the front replicates the shutter release button on the body (which continues to function as well) while the smaller button to the rear of the camera controls the motor – depending on the current state of the camera it cocks the shutter, winds the film and resets the mirror when it is locked up. The mode dial offers three modes: off, S and C. S and C appear to correspond to the S and C modes of the main camera, single and continuous mirror lockup shots.

Overall with this grip fitted and a prism attached the camera operates very similarly to a 35mm SLR in terms of film winding and so on. It is of course heavier (the whole setup weighs in at 2.5kg) but balanced very well and the grip is very comfortable to use.

23 April, 2017 01:17PM by broonie

Andreas Metzler

balance sheet snowboarding season 2016/17

Another year of minimal snow. Again there was early snowfall in the mountains at the start of November, but the snow was soon gone again. There was no snow below 2000 meters of altitude until about January 3. Christmas week was spent hiking up and taking the lift down.

I had my first day on board on January 6, on artificial snow, and the first one on natural snow on January 19. Down where I live (800m), snow was scarce the whole winter, never topping 1m. The measuring station at Diedamskopf, at 1800m above sea level, topped out at slightly above 200cm on April 19. The last boarding day was yesterday (April 22) in Warth, with hero conditions.

I had a pre-opening on the glacier in Pitztal at the start of November with Pure Boarding. However, due to the long waiting period between pre-opening and the start of the season, it did not pay off: by the time I was riding regularly I had forgotten almost everything I had learned at carving school.

Nevertheless it was a strong season, due to long periods of stable, sunny weather, with 30 days on piste (counting the day I went up and barely managed a single blind run in super-dense fog).

Anyway, here is the balance-sheet:

(rows: season; columns: number of (partial) days, days in Damüls, at Diedamskopf and in Warth/Schröcken, total meters of altitude, highscore, # of runs)

season    days  Damüls  Diedamskopf  Warth/Schröcken  meters of altitude  highscore  runs
2005/06     25      10           15                0              124634     10247m   309
2006/07     17      10            4                3               74096      8321m   189
2007/08     29       5           24                0              219936     12108m   503
2008/09     37      10           23                4              226774     11272m   551
2009/10     30      16           13                1              202089     11888m   462
2010/11     30      23            4                3              203918     10976m   449
2011/12     25      10           14                1              228588     13076m   516
2012/13     23       4           19                0              203562     13885m   468
2013/14     30      29            1                0              274706     12848m   597
2014/15     24       9           13                2              224909     13278m   530
2015/16     17       4           12                1              138037     11015m   354
2016/17     30       4           23                3              269819     12245m   634

23 April, 2017 12:03PM by Andreas Metzler

April 22, 2017

Enrico Zini

Splitting a git-annex repository

I have a git annex repo for all my media that has grown to 57866 files and git operations are getting slow, especially on external spinning hard drives, so I decided to split it into separate repositories.

This is how I did it, with some help from #git-annex. Suppose the old big repo is at ~/oldrepo:

# Create a new repo for photos only
mkdir ~/photos
cd ~/photos
git init
git annex init laptop

# Hardlink all the annexed data from the old repo
cp -rl ~/oldrepo/.git/annex/objects .git/annex/

# Regenerate the git annex metadata
git annex fsck --fast

# Also split the repo on the usb key
cd /media/usbkey
git clone ~/photos
cd photos
git annex init usbkey
cp -rl ../oldrepo/.git/annex/objects .git/annex/
git annex fsck --fast

# Connect the annexes as remotes of each other
git remote add laptop ~/photos
cd ~/photos
git remote add usbkey /media/usbkey

At this point, I went through all repos doing standard cleanup:

# Remove unneeded hard links
git annex unused
git annex dropunused --force 1-12345

# Sync
git annex sync

To make sure nothing is missing, I used git annex find --not --in=here to see if, for example, the usbkey that should have everything could be missing something.
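
For example, a quick check run from within the repository that is supposed to hold everything:

# on the usb key: list annexed files whose content is not present here
cd /media/usbkey/photos
git annex find --not --in=here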

Update: Antoine Beaupré pointed me to this tip about Repositories with large number of files which I will try next time one of my repositories grows enough to hit a performance issue.

22 April, 2017 06:48PM

Manuel A. Fernandez Montecelo

Debian GNU/Linux port for RISC-V 64-bit (riscv64)

This is a post describing my involvement with the Debian GNU/Linux port for RISC-V (unofficial and not endorsed by Debian at the moment) and announcing the availability of the repository (still very much WIP) with packages built for this architecture.

If you are not interested in the story but want to check out the repository, just jump to the bottom.

Roots

A while ago, mostly during 2014, I was involved in the Debian port for OpenRISC (or1k) ─ about which I posted (by coincidence) exactly 2 years ago.

The two of us working on the port stopped in August or September of that year, after learning that the copyright of the code adding support for this architecture in GCC would not be assigned to the FSF, so it would never be merged into GCC upstream ─ unless the original authors changed their mind (which they didn't) or there was a clean-room reimplementation (which hasn't happened so far).

But a few other key things, which bear a direct relationship to this story, contributed to the decision to stop working on that port.

One thing that particularly influenced me to stop working on it was a sense of lack of purpose, all things considered, for the OpenRISC port that we were working on.

For example, these chips are sometimes used by Samsung as part of bigger devices, to control or wake up other chips; but it was not clear whether there would ever be devices with OpenRISC as the main chip, especially devices powerful enough to run Linux or similar kernels, and Debian on top. One can use FPGAs to synthesise OpenRISC or1k, but these are slow, and expensive when using lots of memory.

Without prospects of having hardware easily available to users, there's not much point in having a whole Debian port ready to run on hardware that never comes to be.

Yeah, sure, it's fun to create such a port, but it's tons of work to maintain and keep up-to-date forever, and with close to zero users it's very unrewarding.

Another thing that contributed to the decision to stop was that, at least in my opinion, 32-bit was not future-proof enough for general-purpose computing, especially for new devices and ports starting to take off at that time. There was some incipient work to create another OpenRISC design for 64 bits, but it was still in an early phase.

My secret hope and ultimate goal was to be able to run as free a computer as possible as my main system. Still today many people are buying and using 32-bit devices, like small boards; but very few use them as their main computer or as servers for demanding workloads or complex services. So for me, even if feasible for the very austere and dedicated, OpenRISC or1k failed that test.

And lastly, another thing happened at the time...

Enter RISC-V

In August 2014, at the point when we were fully acknowledging the problem of upstreaming (or rather, the lack thereof) the support for OpenRISC in GCC, RISC-V was announced to the world, bringing along papers with suggestive titles such as Instruction Sets Should Be Free: The Case For RISC-V (pdf) and articles like RISC-V: An Open Standard for SoCs - The case for an open ISA in EE Times.

RISC-V (like the previous RISC-n designs) had been designed (or rather, was being designed, because it was and is an as yet unfinished standard) by people from UC Berkeley, including David Patterson, the pioneer of RISC computer designs and co-author of the seminal book “Computer Architecture: A Quantitative Approach”. Other very capable people are also leading the project, doing the design and legwork to make it happen ─ see the list of contributors.

But, name-dropping aside, the project has many other merits.

Similarly to OpenRISC, RISC-V is an open instruction set architecture (ISA), but with the advantage of being designed more recently (thus avoiding some past mistakes and optimising for problems discovered as technology has evolved); with more resources; with support for instruction widths of 32, 64 and even 128 bits; with a clean set of standard but optional extensions (atomics, float, double, quad, vector, ...); and with reserved space to add new extensions in ordered and compatible ways.

In the view of some people in the OpenRISC community, this unexpected development in a way made the ongoing update of OpenRISC for 64 bits irrelevant, and from what I know, for better or worse, all efforts on that front stopped.

Also interesting (if nothing else, for my purposes of running as free a system as possible) was that the people behind RISC-V had strong intentions to make it useful for creating modern hardware, and were pushing heavily towards that goal from the beginning.

Together with this announcement, or soon after, came the promise of free-ish hardware in the form of the lowRISC project. Although still today it seems to be a long way from actually shipping hardware, at least there was some prospect of getting it some day.

On top of all that, regarding the freedom aspect, both the Berkeley and lowRISC teams engaged from very early on with the free hardware community, including attending and presenting at OpenRISC events; and lowRISC intended to have as much free hardware as possible in their planned SoC.

Starting to work on the Debian port, this time for RISC-V

So in late 2014 I slowly started to prepare the Debian port, working on it on and off.

The Userland spec was frozen in 2014, just before the announcement; the Privilege spec is still not frozen today, so there was no need to rush.

There were plans to upstream the support in the toolchain for RISC-V (GNU binutils, glibc and GCC; Linux and other useful software like Qemu) in 2015, but sadly these did not materialise until late 2016 and 2017. One of the main reasons for the delay was the slowness in sorting out the copyright assignment of the code to the FSF (again). Still today, only binutils and GCC are upstreamed; Linux and glibc depend on the Privilege spec being finished, so it will take a while.

After the experience with OpenRISC and the support in GCC, I didn't want to invest too much time, lest it all become another dead end due to lack of upstreaming ─ so I was just cross-compiling here and there, testing Qemu (which still today is very limited for this architecture, e.g. no network support and very limited character and block devices), trying to find and report bugs in the implementations, and sending patches (although I did not contribute much in the end).

Incompatible changes in the toolchain

In terms of compiling packages and building up a repository, things were complicated, and less mature and stable than the OpenRISC equivalents were even back in 2014.

In theory, with the Userland spec frozen, regular programs (below the operating-system level) compiled at any time should still run today; but in practice there were several rounds of profound ─ or, at least, disruptive ─ changes in the toolchain before and while being upstreamed, which made the binary packages that I had built not work at all (changes in the dynamic loader, in the registers where arguments are stored when calling functions, etc.).

These major breakages happened several times already, and kind of unexpectedly ─ at least for the people not heavily involved in the implementation.

Once the different pieces are upstreamed, these breakages are not expected to happen anymore; but there's still at least the fundamental bit of glibc, which will probably change things once again in incompatible ways before or while being upstreamed.

Outside Debian but within the FOSS / Linux world, the main project that I know of is a port that some people from Fedora started in mid 2016, making great advances; but ─ from what I know ─ they put the project in the freezer in late 2016 until all such problems are resolved: they don't want to spend time rebootstrapping again and again.

What happened recently on the Debian front

In early 2016 I created the page for RISC-V in the Debian wiki, expecting that things would at last be fully stable and the important bits of the toolchain upstreamed during that year ─ I was too optimistic.

Some other people (including Debian folks) have been contributing for a while, in the wiki, mailing lists and IRC channels, and in the RISC-V project mailing lists ─ you will see their names everywhere.

However, the combination of lack of hardware, software not being upstreamed and the shortcomings of emulators (chiefly Qemu) makes contributing hard and very tedious, so recently nothing much has happened that is visible to the outside world in terms of software.

The private repository-in-the-making

In late 2015 and the beginning of 2016, having some free time on my hands and expecting that all things would coalesce quickly, I started to build a repository of binary packages in a more systematic way, with most of the basic software that one can expect in a basic Debian system (including things common to all Linux systems, and also Debian-specific software like dpkg or apt, and even aptitude!).

After that I also built many other packages outside the basic system (more than 1000 source packages and 2000 or 3000 arch-dependent binary packages in total), especially popular libraries (e.g. boost, gtk+ versions 2 and 3), interpreters (several versions of lua, perl and python, both versions 2 and 3) and, in general, packages that are needed to build many other packages (like doxygen). Unfortunately, some of these most interesting packages do not compile cleanly (more because of obscure or silly errors than proper porting problems), so they are not included at the moment.

I intentionally avoided trying to compile the thousands of packages in the archive which would be of no use to anybody at this point; but many more could be compiled without much effort.

As for the how: initially I started by cross-compiling, using rebootstrap, which was of great help in the beginning. Some of the packages that I cross-compiled had bugs that I did not know how to debug without a “live” and “native” (within emulators) system, so I tried to switch to “natively” built packages very early on. For that I needed many packages built natively (like doxygen or cmake) which would have been unnecessary had I remained cross-compiling ─ the host tools would be used in that case.

But this also forced me to eat my own dog food, which, even if much slower and more tedious, was on the whole a more instructive experience; and above all, it helped to test and make sure that the tools and the whole stack were working well enough to build hundreds of packages.

Why the repository-in-the-making was private

Until now I did not attempt to make the repository available on-line, for several reasons.

First, because it would be kind of useless to publish files that were not working, or would soon stop working, due to the incompatible changes in the toolchain rendering many or most of the built packages useless. And because, for many months now, I have been expecting things to stabilise and to have something stable “really soon now” ─ but this hasn't happened yet.

Second, because of lack of resources and time since mid 2016, and because I got some packages to compile only thanks to (mostly small and unimportant, but undocumented and unsaved) hacks, often working around temporary bugs and thus not worth sending upstream; and I couldn't share the binaries without sharing the full source and fulfilling licenses like the GNU GPL. I did a new round of clean rebuilds in the last few weeks, just finished; the result is close to 1000 arch-dependent packages.

And third, because of lack of demand. This changed in the last few weeks, when other people started to ask me to share the results even if incomplete or not working properly (I had one request in the past, but couldn't oblige in time).

Finally, the work so far: repository now on-line

So finally, with the great help from Kurt Keville from MIT, and Bytemark sponsoring a machine where most of the packages were built, here we have the repository:

The lines for /etc/apt/sources.list are:

 deb [ arch=riscv64 signed-by=/usr/share/keyrings/debian-keyring.gpg ] http://riscv.mit.edu/debian unstable main
 deb-src [ signed-by=/usr/share/keyrings/debian-keyring.gpg ] http://riscv.mit.edu/debian unstable main

The repository is signed with my key as Debian Developer, contained in the file /usr/share/keyrings/debian-keyring.gpg, which is part of the package debian-keyring (available from Debian and derivatives).
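
If that keyring file is not present on your system yet, installing the package that ships it is a single command:

 sudo apt-get install debian-keyring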

WARNING!!

This repository, though, is very much WIP, incomplete (some package dependencies cannot be fulfilled, and it covers only a small percentage of the Debian archive, not trying to be comprehensive at the moment) and probably does not work at all on your system at this point, for the following reasons:

  • I did not create Debian packages for fundamental pieces like glibc, gcc, linux, etc.; although I hope that this happens soon after the next stable release (Stretch) is out of the door and the remaining pieces are upstreamed (help welcome).
     
  • There's no base system image yet, containing the software above plus a boot loader and a boot sequence to set up the system correctly. At the moment you have to provide your own or use one that you can find around the net. But hopefully we will provide an image soon. (Again, help welcome.)
     
  • Due to ongoing changes in the toolchain, if the version of the toolchain in your image is very new or very old, chances are that the binaries won't work at all. The one that I used was the master branch on 20161117 of their fork of the toolchain: https://github.com/riscv/riscv-gnu-toolchain

The combination of all these shortcomings is especially unfortunate, because without glibc provided it will be difficult to get the binaries to run at all; but some packages that are arch-dependent yet not too tied to libc or the dynamic loader will not be affected.

At least you can try one of the few static packages present in Debian, like the one in the package bash-static. When one removes moving parts like the dynamic loader and libc, since the basic machine instructions have been stable for several years now, it should work ─ but I wouldn't rule out some dark magic preventing even static binaries from working.

Still, I hope that the repository in its current state is useful to some people, at least those who requested it. If one has the environment set up, it's easy to unpack the contents of the .deb files and try out the software (which is often not trivial or is very slow to compile, or needs lots of dependencies to be built first).
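
For instance, a sketch of inspecting a package without installing it; the file name is hypothetical:

 # extract the payload of a .deb into a scratch directory, bypassing the dpkg database
 dpkg-deb -x bash-static_4.4-4_riscv64.deb /tmp/unpacked
 ls /tmp/unpacked/bin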

... and finally, even if it is not useful at all for most people at the moment, by doing this I also hope that efforts like this spark your interest in contributing to free software, free hardware, or both! :-)

22 April, 2017 02:00AM by Manuel A. Fernandez Montecelo

April 21, 2017


Ritesh Raj Sarraf

Indian Economy

This has finally gotten me to ask the question.

All this time since my childhood, I grew up reading, hearing and watching that the core of India's economy is agriculture, and that it needs the highest bracket in the budgets of the country. It still applies today. Every budget has special waivers for the agriculture sector, typically in hundreds of thousands of crores in Indian Rupees. The most recent to mention is INR 27420 crores waived off for just a single state (Uttar Pradesh), as was promised by the winning party during their campaign.

Wow. A quick search yields that I am not alone in noticing this. In the past, whenever I talked about the economy of this country, I mostly sidelined myself, because I never studied here, and neither did I live here much during my childhood or teenage days. Only in the last decade have I realized how much tax I pay, and where my taxes go.

I do see a justification for these loan waivers, though. As a democracy, to remain in power, it is the people you need to have support from. And if a majority of your 1.3 billion population is in the agriculture sector, it is a very, very lucrative deal to attract them through such waivers, and expect their vote.

Here's another snippet from Wikipedia on the same topic:

Agricultural Debt Waiver and Debt Relief Scheme

On 29 February 2008, P. Chidambaram, at the time Finance Minister of India, announced a relief package for farmers which included the complete waiver of loans given to small and marginal farmers.[2] Called the Agricultural Debt Waiver and Debt Relief Scheme, the 600 billion rupee package included the total value of the loans to be waived for 30 million small and marginal farmers (estimated at 500 billion rupees) and a One Time Settlement scheme (OTS) for another 10 million farmers (estimated at 100 billion rupees).[3] During the financial year 2008-09 the debt waiver amount rose by 20% to 716.8 billion rupees and the overall benefit of the waiver and the OTS was extended to 43 million farmers.[4] In most of the Indian States the number of small and marginal farmers ranges from 70% to 94% of the total number of farmers.

And not to forget how many people pay taxes in India. To quote an unofficial statement from an Indian media house:

Only about 1 percent of India's population paid tax on their earnings in the year 2013, according to the country's income tax data, published for the first time in 16 years.

The report further states that a total of 28.7 million individuals filed income tax returns, of which 16.2 million did not pay any tax, leaving only about 12.5 million tax-paying individuals, which is just about 1 percent of the 1.23 billion population of India in the year 2013.

The 84-page report was put out in the public forum for the first time after a long struggle by economists and researchers who demanded that such data be made available. In a press release, a senior official from India's income tax department said the objective of publishing the data is to encourage wider use and analysis by various stakeholders including economists, students, researchers and academics for purposes of tax policy formulation and revenue forecasting.

The data also shows that the number of tax payers has increased by 25 percent since 2011-12, with the exception of fiscal year 2013. The year 2014-15 saw a rise to 50 million tax payers, up from 40 million three years ago. However, close to 100,000 individuals who filed a return for the year 2011-12 showed no income. The report brings to light low levels of tax collection and a massive amount of income inequality in the country, showing the rich aren't paying enough taxes.

Low levels of tax collection could be a challenge for the current government as it scrambles for money to spend on its ambitious plans in areas such as infrastructure and science & technology. Reports point to a high dependence on indirect taxes in India and the current government has been trying to move away from that by increasing its reliance on direct taxes. Official data show that the dependence has come down from 5.93 percent in 2008-09 to 5.47 percent in 2015-16.

I can't say if I am correct in my understanding of this chart, or in my understanding of the economy of India; but if there's someone well versed in this topic who has studied the Indian economy, I'd be really interested to know what they have to say. Because otherwise, from my own interpretation of the subject, I don't see the day far away when this economy will plummet.

PS: Image source Wikipedia https://upload.wikimedia.org/wikipedia/commons/2/2e/1951_to_2013_Trend_C...


21 April, 2017 06:33PM by Ritesh Raj Sarraf


Joachim Breitner

veggies: Haskell code generation from scratch

How hard is it to write a compiler for Haskell Core? Not too hard, actually!

I wish we had a formally verified compiler for Haskell, or at least for GHC’s intermediate language Core. Formalizing that part of GHC itself seems to be far out of reach, with the many phases the code goes through (Core to STG to Cmm to assembly or LLVM), optimizations happening at all of these phases, and the many complicated details of the highly tuned GHC runtime (pointer tagging, support for concurrency and garbage collection).

Introducing Veggies

So to make that goal of a formally verified compiler more feasible, I set out and implemented code generation from GHC’s intermediate language Core to LLVM IR, with simplicity as the main design driving factor.

You can find the result in the GitHub repository of veggies (the name derives from “verifiable GHC”). If you clone that and run ./boot.sh some-directory, you will find that you can use the program some-directory/bin/veggies just like you would use ghc. It comes with the full base library, so your favorite variant of HelloWorld might just compile and run.
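
A sketch of that workflow; the clone URL and the HelloWorld file are my assumptions, not taken from this post:

git clone https://github.com/nomeata/veggies
cd veggies
./boot.sh ../veggies-install
echo 'main = putStrLn "Hello, World!"' > Hello.hs
../veggies-install/bin/veggies Hello.hs   # invoked just like ghc
./Hello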

As of now, the code generation handles all the Core constructs (which is easy when you simply ignore all the types). It supports a good number of primitive operations, including pointers and arrays – I implement these as needed – and has support for FFI calls into C.

Why you don't want to use Veggies

Since the code generator was written with simplicity in mind, the performance of the resulting code is abysmal: everything is boxed, i.e. represented as a pointer to some heap-allocated data, including “unboxed” integer values and “unboxed” tuples. This is very uniform and simplifies the code, but it is also slow, and because there is no garbage collection (and probably never will be for this project), it will fill up your memory quickly.

Also, the code currently only supports 64-bit architectures, and this is hard-coded in many places.

There is no support for concurrency.

Why it might be interesting to you nevertheless

So if it is not really usable to run programs with, should you care about it? Probably not, but maybe you do for one of these reasons:

  • You always wondered how a compiler for Haskell actually works, and reading through a little over a thousand lines of code is less daunting than reading through the 34k lines of code of GHC’s backend.
  • You have wacky ideas about Code generation for Haskell that you want to experiment with.
  • You have wacky ideas about Haskell that require special support in the backend, and want to prototype that.
  • You want to see how I use the GHC API to provide a ghc-like experience. (I copied GHC’s Main.hs and inserted a few hooks, an approach I copied from GHCJS).
  • You want to learn about running Haskell programs efficiently, and starting from veggies, you can implement all the tricks of the trade yourself and enjoy observing the speed-ups you get.
  • You want to compile Haskell code to some weird platform that is supported by LLVM, but where you for some reason cannot run GHC’s runtime. (Because there are no threads and no garbage collection, the code generated by veggies does not require a runtime system.)
  • You want to formally verify Haskell code generation. Note that the code generator targets the same AST for LLVM IR that the vellvm2 project uses, so eventually veggies can become a verified arrow in the top-right corner of the map of the DeepSpec project.

So feel free to play around with veggies, and report any issues you have on the GitHub repository.

21 April, 2017 03:30PM by Joachim Breitner (mail@joachim-breitner.de)


Rhonda D'Vine

Home

A fair amount of things have happened since I last blogged about something other than music. First of all, we did actually hold a Debian Diversity meeting. It was quite nice, with fewer people around than hoped for, and I attribute that to some extent to the trolls and haters who defaced the titanpad page for the agenda and destroyed the doodle entry for settling on a date for the meeting. They even tried to troll my blog with comments, and while I did approve controversial responses in the past, those went over the line of being acceptable and didn't carry any relevant content.

One response that I didn't approve but kept in my mailbox is even giving me strength to carry on. There is one sentence in it that speaks to me: Think you can stop us? You can't you stupid b*tch. You have ruined the Debian community for us. The rest of the message is of no further relevance, but even though I can't take credit for being responsible for that, I'm glad to be a perceived part of ruining the Debian community for intolerant and hateful people.

A lot of other things have happened since, too. Mostly locally here in Vienna: several queer empowering groups were forming around me; some of them existed already, some formed with my help. We now have several great regular meetings for non-binary people, for queer polyamorous people (about which we gave an interview), a queer playfight (I might explain that concept another time), a polyamory discussion group, two bi-/pansexual groups, a queer-feminist choir, and there will be a European Lesbian* Conference in October where I help with the organization …

… and on June 21st I'll finally receive the keys to my flat in Que[e]rbau Seestadt. I'm sooo looking forward to it. It will be part of the Let me come Home experience that I'm currently in. Another part of that experience is that I have started changing my name (and gender marker) officially. I had my first appointment at the corresponding bureau, and I hope that it won't take too long, because I have to get my papers in time for booking my flight to Montreal, and at some point during the process my current passport won't contain correct data anymore. So for the people who have it in their signing policy to see government IDs, this might be your chance to finally sign my key then.

I plan to do a diversity BoF at debconf where we can speak more directly on where we want to head with the project. I hope I'll find the time to do an IRC meeting beforehand. I'm just uncertain how to coordinate that one to make it accessible for interested parties while keeping the destructive trolls out. I'm open for ideas here.


21 April, 2017 08:01AM by Rhonda

Noah Meyerhans

Stretch images for Amazon EC2, round 2

Following up on a previous post announcing the availability of a first round of AWS AMIs for stretch, I'm happy to announce the availability of a second round of images. These images address all the feedback we've received about the first round. The notable changes include:

  • Don't install a local MTA.
  • Don't install busybox.
  • Ensure that /etc/machine-id is recreated at launch.
  • Fix the security.debian.org sources.list entry.
  • Enable Enhanced Networking and ENA support.
  • Images are owned by the official debian.org AWS account, rather than my personal account.

AMI details are listed on the wiki. As usual, you're encouraged to submit feedback to the cloud team via the cloud.debian.org BTS pseudopackage, the debian-cloud mailing list, or #debian-cloud on irc.

21 April, 2017 04:37AM


Dirk Eddelbuettel

Rblpapi 0.3.6

Time for a new release of Rblpapi -- version 0.3.6 is now on CRAN. Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg Labs (but note that a valid Bloomberg license and installation is required).

This is the seventh release since the package first appeared on CRAN last year. This release brings a very nice new function lookupSecurity() contributed by Kevin Jin as well as a number of small fixes and enhancements. Details below:

Changes in Rblpapi version 0.3.6 (2017-04-20)

  • bdh can now store in double preventing overflow (Whit and John in #205 closing #163)

  • bdp documentation has another override example

  • A new function lookupSecurity can search for securities, optionally filtered by yellow key (Kevin Jin and Dirk in #216 and #217 closing #215)

  • Added file init.c with calls to R_registerRoutines() and R_useDynamicSymbols(); also use .registration=TRUE in useDynLib in NAMESPACE (Dirk in #220)

  • getBars and getTicks can now return data.table objects (Dirk in #221)

  • bds has improved internal protect logic via Rcpp::Shield (Dirk in #222)
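
A quick sketch of the new search function in use: blpConnect() is part of the package, but any lookupSecurity() arguments beyond the query string are my assumption based on the changelog entry above, not checked against the documentation:

R> library(Rblpapi)
R> blpConnect()            # requires a valid Bloomberg installation and license
R> lookupSecurity("IBM")   # search securities matching "IBM", optionally filtered by yellow key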

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the Rblpapi page. Questions, comments etc. should go to the issue tickets system at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

21 April, 2017 01:36AM

April 20, 2017

RcppQuantuccia 0.0.1

New package! And, as it happens, effectively a subset or variant of one of my oldest packages, RQuantLib.

Fairly recently, Peter Caspers started to put together a header-only subset of QuantLib. He called this Quantuccia and, upon my asking, said that it stands for "little sister" of QuantLib. Very nice.

One design goal is to keep Quantuccia header-only. This makes distribution and deployment much easier. In the fifteen years that we have worked with QuantLib by providing the R bindings via RQuantLib, it has always been a concern to provide current QuantLib libraries on all required operating systems. Many people have helped over the years, but it is still an issue; e.g. right now we have no Windows package as there is no library to build it against.

Enter RcppQuantuccia. It only depends on R, Rcpp (for seamless R and C++ integration) and BH, which brings the Boost headers. This will make it much easier to have Windows and macOS binaries.

So what can it do right now? We started with calendaring: you can compute dates pertaining to different (ISDA and other) business day conventions, and compute holiday schedules. Here is one example, computing inter alia under the NYSE holiday schedule common for US equity and futures markets:

R> library(RcppQuantuccia)
R> fromD <- as.Date("2017-01-01")
R> toD <- as.Date("2017-12-31")
R> getHolidays(fromD, toD)        # default calendar, i.e. TARGET
[1] "2017-04-14" "2017-04-17" "2017-05-01" "2017-12-25" "2017-12-26"
R> setCalendar("UnitedStates")
R> getHolidays(fromD, toD)        # US aka US::Settlement
[1] "2017-01-02" "2017-01-16" "2017-02-20" "2017-05-29" "2017-07-04" "2017-09-04"
[7] "2017-10-09" "2017-11-10" "2017-11-23" "2017-12-25"
R> setCalendar("UnitedStates::NYSE")
R> getHolidays(fromD, toD)        # US New York Stock Exchange
[1] "2017-01-02" "2017-01-16" "2017-02-20" "2017-04-14" "2017-05-29" "2017-07-04"
[7] "2017-09-04" "2017-11-23" "2017-12-25"
R>

The GitHub repo already has a few more calendars, and more are expected. Help is of course welcome, both for this and for porting over actual quantitative finance calculations.

More information is on the RcppQuantuccia page. Issues and bug reports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

20 April, 2017 01:38AM


Norbert Preining

TeX Live 2017 pretest started

Preparations for the release of TeX Live 2017 started a few days ago with the freeze of updates in TeX Live 2016 and the announcement of the official start of the pretest period. That means that we invite people to test the new release and help fix bugs.

Notable changes are listed on the pretest page; here I only want to report on the changes in the core infrastructure: changes in the user/sys mode of fmtutil and updmap, and the introduction of the tlmgr shell.

User/sys mode of fmtutil and updmap

We (both at TeX Live and Debian) regularly got error reports about fonts not being found, formats not being updated, etc. The reason for all this was unmistakably the same: the user had called updmap or fmtutil without the -sys option, thus creating a copy of the set of configuration files under his home directory, shadowing all later updates on the system side.

The reason for this behavior is the widespread misinformation (outdated information) on the internet suggesting to simply call updmap.

To counteract this, we have changed the behavior so that both updmap and fmtutil now accept a new argument -user (in addition to the already present -sys), and refuse to run when called without either of them, printing a warning and linking to an explanation page. This page provides more detailed documentation and best-practice examples.
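
In practice the two modes look like this (a sketch; as before, the -sys variants are typically run with administrator rights):

# system-wide: update font maps and formats for all users
sudo updmap -sys
sudo fmtutil -sys --all

# per-user: creates configuration under your home directory,
# deliberately shadowing later system-side updates
updmap -user
fmtutil -user --all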

tlmgr shell

The TeX Live Manager got a new `shell’ mode, invoked by tlmgr shell. Details still need to be fleshed out, but in principle it is possible to use get and set to query and set some of the options normally passed via the command line, and to use all the actions as defined in the documentation. The advantage of this is that it is not necessary to load the tlpdb for each invocation. Here is a short example:

[~] tlmgr shell
protocol 1
tlmgr> load local
OK
tlmgr> load remote
tlmgr: package repository /home/norbert/public_html/tlpretest (verified)
OK
tlmgr> update --list
tlmgr: saving backups to /home/norbert/tl/2017/tlpkg/backups
update:   bchart             [147k]: local:    27496, source:    43928
...
update:   xindy              [535k]: local:    43873, source:    43934
OK
tlmgr> update --all
tlmgr: saving backups to /home/norbert/tl/2017/tlpkg/backups
[ 1/22, ??:??/??:??] update: bchart [147k] (27496 -> 43928) ... done
[ 2/22, 00:00/00:00] update: biber [1041k] (43873 -> 43910) ... done
...
[22/22, 00:50/00:50] update: xindy [535k] (43873 -> 43934) ... done
running mktexlsr ...
done running mktexlsr.
...
OK
tlmgr> quit
tlmgr: package log updated: /home/norbert/tl/2017/texmf-var/web2c/tlmgr.log 
[~] 

Please test and report bugs to our mailing list.

Thanks

20 April, 2017 12:51AM by Norbert Preining

April 19, 2017

Reproducible builds folks

Reproducible Builds: week 103 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday April 9 and Saturday April 15 2017:

Upcoming events

On April 26th Chris Lamb will give a talk at foss-north 2017 in Gothenburg, Sweden on Reproducible Builds.

Media coverage

Jake Edge wrote a summary of Vagrant Cascadian's talk on Reproducible Builds at LibrePlanet.

Toolchain development and fixes

Ximin Luo forwarded patches to GCC for BUILD_PATH_PREFIX_MAP support.

With this patch backported to GCC-6, as well as a patched dpkg to set the environment variable, he scheduled ~3,300 packages that are unreproducible in unstable-amd64 but reproducible in testing-amd64 - because we vary the build path in the former but not the latter case. Our infrastructure ran these in just under 3 days, and we reproduced ~1,700 extra packages.

This is about 6.5% of ~26,100 Debian source packages, and about 1/2 of the ones whose irreproducibility is due to build-path issues. Most of the rest are not related to GCC, such as things built by R, OCaml, Erlang, LLVM, PDF IDs, etc.

(The dip afterwards, in the graph linked above, is due to reverting back to an unpatched GCC-6, but we'll be rebasing the patch continually over the next few weeks so the graph should stay up.)

Packages reviewed and fixed, and bugs filed

Chris Lamb:

Chris West:

Reviews of unreproducible packages

38 package reviews have been added, 111 have been updated and 85 have been removed this week, adding to our knowledge about identified issues.

6 issue types have been updated:

Added:

Updated:

Removed:

diffoscope development

Development continued in git on the experimental branch:

Chris Lamb:

  • Don't crash on invalid archives (#833697)
  • Tidy up some other code

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Chris Lamb (3)
  • Chris West (1)

Misc.

This week's edition was written by Ximin Luo, Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

19 April, 2017 09:00PM

Lior Kaplan

Open source @ Midburn, the Israeli burning man

This year I decided to participate in Midburn, the Israeli version of Burning Man. While thinking of doing something different from my usual habits, I found myself volunteering in the Midburn IT department, with the task of making it an open source project. Back into my comfort zone, while trying to escape it.

I found a community of volunteers from the Israeli high-tech scene who work together on building the infrastructure for Midburn. In many ways, it's already an open source community in the way it works. One critical and formal piece was missing, though: the license for the code. After some discussion we decided on the Apache License 2.0, and I started the process of changing the license, taking it seriously and making sure it goes "by the rules".

Our code is available on GitHub at https://github.com/Midburn/. And while it still needs some tidying up, I prefer the release early and often approach. The main idea we want to bring to the Burn infrastructure is using Spark as a database, and we have already begun talking with parallel teams of other burn events. I'll follow up on our technological agenda / vision. In the meanwhile, you are more than welcome to comment on the code or join one of the teams (e.g. the volunteers module to organize who does which shift during the event).

Filed under: Israeli Community

19 April, 2017 03:00PM by Kaplan

April 18, 2017

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Chinese HDMI-to-SDI converters

I often need to convert signals from HDMI to SDI (and occasionally back). This requires a box of some sort, and eBay obliges; there's a bunch of different sellers of the same devices, selling them for around $20–25. They don't seem to have a brand name, but they are invariably sold as 3G-SDI converters (meaning they should go up to 1080p60) and look like this:

There are also corresponding SDI-to-HDMI converters that look pretty much the same, except they convert the other way. (They're easy to confuse, but that's not a problem unique to them.)

I've used them for a while now, and there are pros and cons. They seem reliable enough, and they're 1/4th the price of e.g. Blackmagic's Micro converters, which is a real bargain. However, there are also some issues:

  • For 3G-SDI, they output level A only, with no option for level B. (In fact, there are no options at all.) Level A is the most sane standard, and also what most equipment uses, but there exists certain older equipment that only works with level B.
  • They don't have reclocking chips, so their timing accuracy is not superb. I managed to borrow a Phabrix SDI analyzer and measured the jitter; with a very short cable, I got approximately 0.85–0.95 UI (varying a bit), whereas a Blackmagic converter gave me 0.23–0.24 UI (much more stable). This may be a problem at very long cable lengths, although I haven't tried 100m runs and such.
  • When converting to and from RGB, they seem to assume Rec. 601 Y'CbCr coefficients even for HD resolutions. This means the colors will be a bit off in some cases, although for most people, it will be hard to notice without looking at it side-by-side. (I believe the HDMI-to-SDI and SDI-to-HDMI converters make the same mistake, so that the errors cancel out if you just want to use a pair as HDMI extenders. Also, if your signal is Y'CbCr already, you don't need to care.)
  • They don't insert SMPTE 352M payload ID. (Supposedly, this is because the SDI chip they use, called GV7600, is slightly out-of-standard on purpose in order to avoid paying expensive licensing fees to SMPTE.) Normally, you wouldn't need to care, but 3G-SDI actually requires this, and worse, Blackmagic's equipment (at least the Duo 2, and I've seen reports about the ATEMs as well) enforces that. If you try to run e.g. 1080p50 through them and into a Duo 2, it will be misdetected as “1080p25, no signal”. There's no workaround that I know of.

The last issue is by far the worst, but it only affects 3G-SDI resolutions. 720p60, 1080p30 and 1080i60 all work fine. And to be fair, not even Blackmagic's own converters actually send 352M correctly most of the time…

I wish there were a way I could publish this somewhere people would actually read it before buying these things, but without a name, it's hard for people to find it. They're great value for money, and I wouldn't hesitate to recommend them for almost all use… but then, there's that almost. :-)

18 April, 2017 11:28PM

Vincent Fourmond

make-theme-image: a script to get an idea of an icon theme

Some time ago I created a utils repository on GitHub to publish miscellaneous scripts, but it's only recently that I have really started to populate it. One of my recent works is the make-theme-image script, which downloads an icon theme package, grabs relevant, user-specifiable icons, and arranges them in a neat montage. The images displayed are the results of running

~ make-theme-image gnome-icon-theme moblin-icon-theme

This is quite useful to get a good idea of the icons available in a package. You can select the icons you want to display using the -w option. The following command should provide you with a decent overview of the icon themes present in Debian:

apt search -- -icon-theme | grep / | cut -d/ -f1 | xargs make-theme-image

I hope you find it useful! In any case, it's on GitHub, so feel free to patch and share.

18 April, 2017 09:34PM by Vincent Fourmond (noreply@blogger.com)

hackergotchi for Steve Kemp

Steve Kemp

3d-Printing is cool

I've heard about 3d-printing a lot in the past, although the hype seems to have mostly died down. My view has always been "That seems cool", coupled with "Everybody says making the models is very hard", and "the process itself is fiddly & time-consuming".

I've been sporadically working for a few months now on a project which displays tram-departure times, part of my drive to "hardware" things with Arduino/ESP8266 devices. Most visitors to our flat have commented on it at least once, and over time it has become gradually more and more user-friendly. Initially it was just a toy project for myself, so everything was hard-coded in the source, but that changed over time - which I mentioned here (specifically the access-point setup):

  • When it boots up, unconfigured, it starts as an access-point.
    • So you can connect and configure the WiFi network it should join.
  • Once it's up and running you can point a web-browser at it.
    • This lets you toggle the backlight, change the timezone, and the tram-stop.
    • These values are persisted to flash so reboots will remember everything.

I've now wired up an input-button to the device too, experimenting with the different ways that a single button can carry out multiple actions:

  • Press & release - toggle the backlight.
  • Press & release twice - a double-click if you like - show a message.
  • Press, hold for 1 second, then release - re-sync the date/time & tram-data.

Anyway, the software is neat, and I can't think of anything obvious to change. So let's move on to the real topic of this post: 3D printing.

I randomly remembered that I'd heard about an online site holding 3D models, and on a whim I searched for "4x20 LCD". That led me to this design, which is exactly what I was looking for. Just like open-source software, we're now living in a world where you can get open-source hardware! How cool is that?

I had to trust the dimensions of the model, and obviously I was going to mount my new button into the box, rather than the knob shown. But having a model was great. I could download it, for free, and I could view it online at viewstl.com.

But with a model obtained, the next step was getting it printed. I found a bunch of commercial companies here in Europe who would print a model and ship it to me, but when I uploaded the model they priced it at €90+. Too much. I'd almost lost interest when I stumbled across a site which provides a gateway to a series of individuals/companies who will print things for you, on demand: 3dhubs.

Once again I uploaded my model, and this time I was able to select a guy in the same city as me. He printed my model for 1/3-1/4 of the price of the companies I'd found, and sent me fun pictures of the object while it was in the process of being printed.

To recap I started like this:

Then I boxed it in cardboard which looked better than nothing, but still not terribly great:

Now I've found an online case design for free, got it printed cheaply by a volunteer (feels like the wrong word, after all I did pay him), and I have something which looks significantly more professional:

Inside it looks as neat as you would expect:

Of course the case still cost 5 times as much as the actual hardware involved (button: €0.05, processor-board €2.00 and LCD I2C display €3.00). But I've gone from being somebody who had zero experience with hardware-based projects 4 months ago, to somebody who has built a project which is functional and "pretty".

The internet really is a glorious thing. Using it for learning, and coding is good, using it for building actual physical parts too? That's something I never could have predicted a few years ago and I can see myself doing it more in the future.

Sure the case is a little rough around the edges, but I suspect it is now only a matter of time until I learn how to design my own models. An obvious extension is to add a status-LED above the switch, for example. How hard can it be to add a new hole to a model? (Hell I could just drill it!)

18 April, 2017 09:00PM

hackergotchi for Norbert Preining

Norbert Preining

Gaming: Firewatch

A nice little game, Firewatch, puts you into a fire watch tower in Wyoming, with only a walkie-talkie connecting you to your supervisor Delilah. A so-called "first person mystery adventure" with very nice graphics and a great atmosphere.

Starting with your trip to the watch tower, the game sends the player on a series of "missions", during which more and more clues about a mysterious disappearance are revealed. The game's progression is rather straightforward; one has hardly any choices, and it is practically impossible to miss something or fail in some way.

The big plus of the game is the great atmosphere, the funny dialogues with Delilah, the story that pulls you into the game, and the development of the character(s). The tower, the cave, all the places one visits are delicately designed with lots of personality, making this a very human-like game.

What is weak is the finish. During the game I was always thinking about whether I should tell Delilah everything, or keep some things secret. But in the end nothing matters; everything ends with a simple escape in the helicopter, without tying up the loose ends. Somehow a pity for such a beautiful game to leave the player somewhat unsatisfied at the end.

But although the finish wasn't that good, I still enjoyed it more than I expected. Due to the simple flow it won't keep you busy for many hours, but as a short break over a few evenings (for me), it was a nice change from all the riddle games I love so much.

18 April, 2017 03:04PM by Norbert Preining

Bits from Debian

Call for Proposals for DebConf17 Open Day

The DebConf team would like to call for proposals for the DebConf17 Open Day, a whole day dedicated to sessions about Debian and Free Software, and aimed at the general public. Open Day will precede DebConf17 and will be held in Montreal, Canada, on August 5th 2017.

DebConf Open Day will be a great opportunity for users, developers and people simply curious about our work to meet and learn about the Debian Project, Free Software in general and related topics.

Submit your proposal

We welcome submissions of workshops, presentations or any other activity which involves Debian and Free Software. Activities in both English and French are accepted.

Here are some ideas about content we'd love to offer during Open Day. This list is not exhaustive, feel free to propose other ideas!

  • An introduction to various aspects of the Debian Project
  • Talks about Debian and Free Software in art, education and/or research
  • A primer on contributing to Free Software projects
  • Free software & Privacy/Surveillance
  • An introduction to programming and/or hardware tinkering
  • A workshop about your favorite piece of Free Software
  • A presentation about your favorite Free Software-related project (user group, advocacy group, etc.)

To submit your proposal, please fill in the form at https://debconf17.debconf.org/talks/new/

Volunteer

We need volunteers to help ensure Open Day is a success! We are specifically looking for people familiar with the Debian installer to attend the Debian installfest, as resources for people seeking help to install Debian on their devices. If you're interested, please add your name to our wiki: https://wiki.debconf.org/wiki/DebConf17/OpenDay#Installfest

Attend

Participation in Open Day is free and no registration is required.

The schedule for Open Day will be announced in June 2017.


18 April, 2017 07:00AM by DebConf team

April 17, 2017

Sylvain Beucler

Practical basics of reproducible builds 3

On my quest to generate reproducible standalone binaries for GNU FreeDink, I met new friends but currently lie defeated by an unexpected enemy...

Episode 1:

  • compiler version needs to be identical and recorded
  • build options and their order need to be identical and recorded
  • build path needs to be identical and recorded (otherwise debug symbols - and BuildIDs - change)
  • diffoscope helps checking for differences in build output

Episode 2:

  • use -Wl,--no-insert-timestamp for .exe (with old binutils 2.25 caveat)
  • no need to set a build path for stripped .exe (no ELF BuildID)
  • reprotest helps checking build variations automatically
  • MXE stack is apparently deterministic enough for a reproducible static build
  • umask needs to be identical and recorded
  • file timestamps need to be set and recorded (more on this in a future episode; a short sketch follows)
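As a teaser for that future episode, here is one way to pin down the last two items in a build script (a sketch; deriving SOURCE_DATE_EPOCH from the last git commit, and the dist/ directory, are illustrative assumptions, not necessarily what FreeDink does):

umask 0022                                             # fix and record the umask
export SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)    # a deterministic reference timestamp
# clamp all file timestamps in the release tree to that epoch
find dist/ -print0 | xargs -0r touch --no-dereference --date="@${SOURCE_DATE_EPOCH}"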

First, the random build differences when using -Wl,--no-insert-timestamp were explained.
analysePE.py shows random build dates:

$ reprotest 'i686-w64-mingw32.static-gcc hello.c -I /opt/mxe/usr/i686-w64-mingw32.static/include -I/opt/mxe/usr/i686-w64-mingw32.static/include/SDL2 -L/opt/mxe/usr/i686-w64-mingw32.static/lib -lmingw32 -Dmain=SDL_main -lSDL2main -lSDL2 -lSDL2main -Wl,--no-insert-timestamp -luser32 -lgdi32 -lwinmm -limm32 -lole32 -loleaut32 -lshell32 -lversion -o hello && chmod 700 hello && analysePE.py hello | tee /tmp/hello.log-$(date +%s); sleep 1' 'hello'
$ diff -au /tmp/hello.log-1*
--- /tmp/hello.log-1490950327   2017-03-31 10:52:07.788616930 +0200
+++ /tmp/hello.log-1523203509   2017-03-31 10:52:09.064633539 +0200
@@ -18,7 +18,7 @@
 found PE header (size: 20)
     machine: i386
     number of sections: 17
-    timedatestamp: -1198218512 (Tue Jan 12 05:31:28 1932)
+    timedatestamp: 632430928 (Tue Jan 16 09:15:28 1990)
     pointer to symbol table: 4593152 (0x461600)
     number of symbols: 11581 (0x2d3d)
     size of optional header: 224
@@ -47,7 +47,7 @@
     Win32VersionValue: 0
     size of image (memory): 4640768
     size of headers (offset to first section raw data): 1536
-    checksum (for drivers): 4927867
+    checksum (for drivers): 4922616
     subsystem: 3
         win32 console binary
     DllCharacteristics: 0

Stephen Kitt mentioned 2 simple patches (1 2) fixing uninitialized memory in binutils.

These patches fix the variation and were submitted to MXE (pull request).


Next was playing with compiler support for SOURCE_DATE_EPOCH (which e.g. sets __DATE__ macros).
The FreeDink DFArc frontend historically displays a build date in the About box:

    "Build Date: %s\n", ..., __TDATE__

sadly support is only landing upstream in GCC 7 :/
I had to remove that date.
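For reference, a minimal sketch of what the SOURCE_DATE_EPOCH support looks like once a GCC 7 toolchain is available (the file name, the gcc-7 binary name and the chosen epoch are illustrative):

$ cat build-date.c
#include <stdio.h>
int main(void) { printf("built on %s\n", __DATE__); return 0; }
$ SOURCE_DATE_EPOCH=1492387200 gcc-7 build-date.c -o build-date && ./build-date
built on Apr 17 2017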


Now come the challenging parts.

All my tests with reprotest checked out. I started writing a reproducible build environment based on Docker (git browse).
At first I could not run reprotest in the container, so I reworked it with SSH support, and reprotest validated determinism.
(I also generate a reproducible .zip archive, more on that later.)

So far so good, but were the releases identical when running reprotest successively on the different environments?
(reminder: this is a .exe build that is insensitive to varying path, hence consistent in a full reprotest)

$ sha256sum *.zip
189d0ca5240374896c6ecc6dfcca00905ae60797ab48abce2162fa36568e7cf1  freedink-109.0-bin-buildsh.zip
e182406b4f4d7c3a4d239eee126134ba5c0304bbaa4af3de15fd4f8bda5634a9  freedink-109.0-bin-docker.zip
e182406b4f4d7c3a4d239eee126134ba5c0304bbaa4af3de15fd4f8bda5634a9  freedink-109.0-bin-reprotest-docker.zip
37007f6ee043d9479d8c48ea0a861ae1d79fb234cd05920a25bb3db704828ece  freedink-109.0-bin-reprotest-null.zip

Ouch! Even though both the Docker image and my host are running Stretch, there are differences.


For the two host builds (direct and reprotest), there is a subtle but simple difference: HOME.
HOME is invariably non-existent in reprotest, while my normal compilation environment has an existing home (duh!).

This caused a subtle bug when cross-compiling with mingw and wine-binfmt:

  • existing home: ./configure attempts to run conftest.exe, wine can create ~/.wine, conftest.exe runs with binfmt emulation, configure assumes:
  checking whether we are cross compiling... no
  • non-existing home: ./configure attempts to run conftest.exe, wine can't create ~/.wine, conftest.exe fails, configure assumes:
  checking whether we are cross compiling... yes

The respective binaries were very different, notably due to a different config.h.
This can be fixed by specifying --build in addition to --host when calling ./configure.
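A sketch of the difference (the --host triplet matches the MXE target used above; the x86_64-pc-linux-gnu build triplet is an assumption about the build machine):

# before: configure probes conftest.exe via wine/binfmt, so the answer depends on $HOME
./configure --host=i686-w64-mingw32.static
# after: stating --build explicitly forces cross-compilation mode regardless of wine
./configure --host=i686-w64-mingw32.static --build=x86_64-pc-linux-gnu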

I suggested reprotest have one of the tests with a valid HOME (#860428).


Now comes the big one: after the fix I still got:

$ sha256sum *.zip
3545270ef6eaa997640cb62d66dc610a328ce0e7d412f24c8f18fdc7445907fd  freedink-109.0-bin-buildsh.zip
cc50ec1a38598d143650bdff66904438f0f5c1d3e2bea0219b749be2dcd2c3eb  freedink-109.0-bin-docker.zip
3545270ef6eaa997640cb62d66dc610a328ce0e7d412f24c8f18fdc7445907fd  freedink-109.0-bin-reprotest-chroot.zip
cc50ec1a38598d143650bdff66904438f0f5c1d3e2bea0219b749be2dcd2c3eb  freedink-109.0-bin-reprotest-docker.zip
3545270ef6eaa997640cb62d66dc610a328ce0e7d412f24c8f18fdc7445907fd  freedink-109.0-bin-reprotest-null.zip

There is consistency on my host, and consistency within Docker, but the two are different.
Moreover, all the .o files were identical, so something must have gone wrong when compiling the libs, that is, in MXE.

After many checks it appears that libstdc++.a is different.
Just overwriting it gets me a consistent FreeDink release on all environments.
Still, when rebuilding it (make gcc), libstdc++.a always has the same environment-dependent checksum.

45f8c5d50a68aa9919ee3602a4e3f5b2bd0333bc8d781d7852b2b6121c8ba27b  /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a  # host
6870b84f8e17aec4b5cf23cfe9c2e87e40d9cf59772a92707152361b6ebc1eb4  /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a  # docker

The two libraries are very different; there's barely any blue in hexcompare.

At that point, I realized that the "official" Docker Debian images are not really official, as Joey explains.
Could it be that Docker maliciously tampered with the compiler, as Ken Thompson warned in 1984??

Well, before jumping to conclusions, let's mix & match.

  • First I rsync a copy of my Docker filesystem and run it in a host chroot with a reset environment.
$ sudo env -i /usr/sbin/chroot chroot-docker/
$ exec bash -l
$ cd /opt/mxe
$ touch src/gcc.mk
$ sha256sum /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a 
6870b84f8e17aec4b5cf23cfe9c2e87e40d9cf59772a92707152361b6ebc1eb4  /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a
$ make gcc
[build]     gcc                    i686-w64-mingw32.static
[done]      gcc                    i686-w64-mingw32.static                                 2709464 KiB    7m2.039s
$ sha256sum /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a 
45f8c5d50a68aa9919ee3602a4e3f5b2bd0333bc8d781d7852b2b6121c8ba27b  /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a
# consistent with host builds
  • Then I import my previous reprotest chroot (plain debootstrap) in Docker:
$ sudo tar -C chroot -c . | docker import - chroot-debootstrap
$ docker run -ti chroot-debootstrap /bin/bash
$ sha256sum /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a
45f8c5d50a68aa9919ee3602a4e3f5b2bd0333bc8d781d7852b2b6121c8ba27b  /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a
$ touch src/gcc.mk
$ make gcc
[build]     gcc                    i686-w64-mingw32.static
[done]      gcc                    i686-w64-mingw32.static                                 2709412 KiB    7m6.608s
$ sha256sum /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a
6870b84f8e17aec4b5cf23cfe9c2e87e40d9cf59772a92707152361b6ebc1eb4  /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a
# consistent with docker builds

So, AFAICS when building with:

  • exactly the same kernel
  • exactly the same GCC sources
  • exactly the same host binaries

then, depending on whether we run in a container or not, we get a consistent but different libstdc++.a.

This kind of issue is not detected with a simple reprotest build, as it only tests variations within a fixed build environment.
This is quite worrisome: I intend to use a container to control my build environment, but I can't guarantee that the container technology will be exactly the same 5 years from now.

All my setup is simple and available for inspection at https://git.savannah.gnu.org/cgit/freedink.git/tree/autobuild/freedink-w32-snapshot/.

I'd very much welcome enlightenment :)

17 April, 2017 03:25PM

hackergotchi for Norbert Preining

Norbert Preining

Systemd again (or how to obliterate your system)

OK, I have been silent about systemd and how it is being forced onto us in Debian, like force-feeding foie gras geese. I have complained about systemd a few times (here and here), but what I read today really made me lose the last drops of trust I had in this monster piece of software.

If you are up for a really surprising read about the main figure behind systemd, enjoy this GitHub issue. It's about a bug that simply does the equivalent of rm -rf / in some cases. The OP gave clear indications, the bug was fixed immediately, but then a comment from the God Poettering himself appeared that made the cup run over:

I am not sure I'd consider this much of a problem. Yeah, it's a UNIX pitfall, but "rm -rf /foo/.*" will work the exact same way, no? (Lennart Poettering, systemd issue 5644)

Well, no: a single minute of testing would have shown him that this is not the case. But we entrust this guy with the whole management of the init process, servers, logs (and soon our toilet and fridge management, X, DNS, whatever you ask for).

There are two issues here: One is that such a bug has been lurking in systemd for probably years. The reason is simple - with these kinds of bugs we pay for the incredible increase in complexity of the init process, which takes over too many services. Referring back to the Turing Award lecture given by Hoare, we see that systemd took the latter path:

I conclude that there are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies. Antony Hoare, Turing Award Lecture 1980

The other issue is how systemd developers deal with bug reports. I have reported several cases here; this is just another one: close the issue for comments, shut up, sweep it under the carpet.

(Image credit: The musings of an Indian Faust)

17 April, 2017 03:05PM by Norbert Preining

hackergotchi for Ross Gammon

Ross Gammon

My March 2017 Activities

March was a busy month, so this monthly report is a little late. I worked two weekends, and I was planning my Easter holiday, so there wasn’t a lot of spare time.

Debian

  • Updated Dominate to the latest version and uploaded it to experimental (due to the Debian Stretch release freeze).
  • Uploaded the latest version of abcmidi (also to experimental).
  • Pinged the bugs for reverse dependencies of pygoocanvas and goocanvas with a view to getting them removed from the archive during the Buster cycle.
  • Asked for help on the Ubuntu Studio developers and users mailing lists to test the coming Ubuntu Studio 17.04 release ISO, because I would be away on holiday for most of it.

Ubuntu

  • Worked on ubuntustudio-controls, reverting it to an earlier revision that Len said was working fine. Unfortunately, when I built and installed it from my PPA, it crashed. I eventually found my mistake with the bzr reversion, fixed it, and prepared an upload ready for sponsorship. Submitted a Freeze Exception bug in the hope that the Release Team would accept it even though we had missed the Final Beta.
  • Put a new power supply in an old computer that was kaput, and got it working again. Set up Ubuntu Server 16.04 on it so that I could get a bit more experience with running a server. It won't last very long, because it is a 32-bit machine, and Ubuntu will probably drop support for that architecture eventually. I used two small spare drives to set up RAID 1 & LVM (so that I can add more space to it later). I set up some Samba shares, so that my wife will be able to get at them from her Windows machine. For music streaming, I set up Emby Server. It would be great to see this packaged for Debian. I uploaded all of my photos and music for Emby to serve around the home (and remotely as well). Set up Obnam to back up the server to an external USB stick (temporarily, until I set up something remote). Set up LetsEncrypt with the wonderful Certbot program.
  • Did the Release Notes for Ubuntu Studio 17.04 Final Beta. As I was in Brussels for two days, I was not able to do any ISO testing myself.

Other

  • Measured up the new model railway layout and documented it in xtrkcad.
  • Started learning Ansible some more by setting up ssh on all my machines so that I could access them with Ansible and manipulate them using a playbook.
  • Went to the Open Source Days conference just down the road in Copenhagen. Saw some good presentations. Of interest for my previous work in the Debian GIS Team was a presentation from the Danish municipalities on how they run projects using Open Source. I noted their use of Proj.4 and OSGeo. I was also pleased to see a presentation from Ximin Luo on Reproducible Builds, and introduced myself briefly after his talk (during the break).
  • Started looking at creating a Django website to store and publish my One Name Study sources (indexes). Started by creating a library to list some of my recently read journals. I will eventually need to import all the others I have listed in a csv spreadsheet that was originally exported from the commercial (Windows-only) Custodian software.

Plan status from last month & update for next month

Debian

For the Debian Stretch release:

  • Keep an eye on the Release Critical bugs list, and see if I can help fix any. – In Progress

Generally:

  • Package all the latest upstream versions of my Debian packages, and upload them to Experimental to keep them out of the way of the Stretch release. – In Progress
  • Begin working again on all the new stuff I want packaged in Debian.

Ubuntu

  • Start working on an Ubuntu Studio package tracker website so that we can keep an eye on the status of the packages we are interested in. – Started
  • Start testing & bug triaging Ubuntu Studio packages. – In progress
  • Test Len’s work on ubuntustudio-controls – Done
  • Do the Ubuntu Studio Zesty 17.04 Final Beta release. – Done
  • Sort out the Blueprints for the coming Ubuntu Studio 17.10 release cycle.

Other

  • Give JMRI a good try out and look at what it would take to package it. – In progress
  • Also look at OpenPLC for simulating the relay logic of real railway interlockings (i.e. a little bit of the day job at home involving free software – fun!). – In progress

17 April, 2017 02:35PM by Ross Gammon

Russell Coker

More KVM Modules Configuration

Last year I blogged about blacklisting a video driver so that KVM virtual machines didn’t go into graphics mode [1]. Now I’ve been working on some other things to make virtual machines run better.

I use the same initramfs for the physical hardware as for the virtual machines. So I need to remove modules that are needed for booting the physical hardware from the VMs, as well as other modules that get dragged in by systemd and other things. One significant saving from this is that I use BTRFS for the physical machine, and the BTRFS driver takes 1M of RAM!

The first thing I did to reduce the number of modules was to edit /etc/initramfs-tools/initramfs.conf and change "MODULES=most" to "MODULES=dep". This significantly reduced the number of modules loaded and also stopped the initramfs from probing for a non-existent floppy drive, which added about 20 seconds to the boot. Note that this will result in your initramfs not supporting different hardware. So if you plan to take a hard drive out of your desktop PC and install it in another PC this could be bad for you, but for servers it's OK, as that sort of upgrade is uncommon for servers and only done with some planning (such as creating an initramfs just for the migration).
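For reference, the whole change amounts to something like this (a sketch; remember to regenerate the initramfs afterwards):

# switch from including most modules to only those the running hardware needs
sed -i 's/^MODULES=most$/MODULES=dep/' /etc/initramfs-tools/initramfs.conf
update-initramfs -u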

I put the following rmmod commands in /etc/rc.local to remove modules that are automatically loaded:
rmmod btrfs
rmmod evdev
rmmod lrw
rmmod glue_helper
rmmod ablk_helper
rmmod aes_x86_64
rmmod ecb
rmmod xor
rmmod raid6_pq
rmmod cryptd
rmmod gf128mul
rmmod ata_generic
rmmod ata_piix
rmmod i2c_piix4
rmmod libata
rmmod scsi_mod

In /etc/modprobe.d/blacklist.conf I have the following lines to stop drivers being loaded. The first line is to stop the video mode being set, and the rest are just to save space. One thing that inspired me to do this is that the parallel port driver gave a kernel error when it loaded and tried to access non-existent hardware.
blacklist bochs_drm
blacklist joydev
blacklist ppdev
blacklist sg
blacklist psmouse
blacklist pcspkr
blacklist sr_mod
blacklist acpi_cpufreq
blacklist cdrom
blacklist tpm
blacklist tpm_tis
blacklist floppy
blacklist parport_pc
blacklist serio_raw
blacklist button
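After a reboot it is easy to check that the blacklist took effect, for example (a sketch using a few of the modules above):

lsmod | grep -E 'bochs_drm|floppy|pcspkr' || echo 'all clear'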

On the physical machine I have the following in /etc/modprobe.d/blacklist.conf. Most of this is to prevent loading of filesystem drivers when making an initramfs. I do this because I know there’s never going to be any need for CDs, parallel devices, graphics, or strange block devices in a server room. I wouldn’t do any of this for a desktop workstation or laptop.
blacklist ppdev
blacklist parport_pc
blacklist cdrom
blacklist sr_mod
blacklist nouveau

blacklist ufs
blacklist qnx4
blacklist hfsplus
blacklist hfs
blacklist minix
blacklist ntfs
blacklist jfs
blacklist xfs

17 April, 2017 10:07AM by etbe

hackergotchi for Norbert Preining

Norbert Preining

Calibre on Debian

Calibre is the prime open source e-book management program, but the Debian releases often lag behind the official ones. Furthermore, the Debian packages remove support for rar-packed e-books, which means that several comic book formats cannot be handled.

Thus, I have published a local repository of calibre packages targeting Debian/sid, with binaries for amd64, where rar support is enabled and, as far as possible, the latest version is included.

deb http://www.preining.info/debian/ calibre main
deb-src http://www.preining.info/debian/ calibre main

The releases are signed with my Debian key 0x6CACA448860CDC13.
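A sketch of wiring the repository up on the client side (the keyserver choice and the apt-key workflow are my own assumptions following common practice, not instructions from the repository):

# fetch and register the signing key
gpg --keyserver hkp://pool.sks-keyservers.net --recv-keys 0x6CACA448860CDC13
gpg --export 0x6CACA448860CDC13 | sudo apt-key add -
# add the repository, then update and install
echo 'deb http://www.preining.info/debian/ calibre main' | sudo tee /etc/apt/sources.list.d/calibre.list
sudo apt-get update && sudo apt-get install calibre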

Enjoy

17 April, 2017 01:33AM by Norbert Preining

April 16, 2017

Antoine Beaupré

Montreal Bug Squashing Party report

A summary of this article has also been translated into French, thanks!

Last Friday, a group of Debian users, developers and enthusiasts met at the Koumbit.org offices for a bug squashing party. We were about a dozen people of various levels: developers, hackers and users.

I gave a quick overview of Debian packaging using my quick development guide, which proved to be pretty useful. I made a deb.li link (https://deb.li/quickdev) so people could easily find the guide on their computers.

Then I started going through a list of the different programs used to do Debian packaging, to gauge the level of the people attending:

  • apt-get install - everyone knew about it
  • apt-get source - everyone paying attention
  • dget - only 1 knew about it
  • dch - 1
  • quilt - about 2
  • apt-get build-dep - 1
  • dpkg-buildpackage - only 3 people
  • git-buildpackage / gitpkg - 1
  • sbuild / pbuilder
  • dput - 1
  • rmadison - 0 (the other DD wasn't paying attention anymore)

So: mostly skilled Debian users (they know apt-get source) but not used to packaging (they don't know about dpkg-buildpackage). So I went through the list again and explained how the tools all fit together and could be used to work on Debian packages in the context of a Debian release bug squashing party. This was the fastest crash course in Debian packaging I have ever given (and probably the first too) - going through those tools in about 30 minutes. I was happy to have the guide in the back that people could refer to later.
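For the record, the rough flow those tools fit into looks something like this (a sketch; the package name and version are hypothetical):

apt-get source foo                 # fetch and unpack the source package
cd foo-1.0
quilt push -a                      # apply the packaging patches
dch -i 'Fix the bug at hand'       # document the change in debian/changelog
sudo apt-get build-dep foo         # install the build dependencies
dpkg-buildpackage -us -uc          # build unsigned binary and source packages
dput foo_1.0-1_amd64.changes       # upload, once signed / sponsored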

The first question after the presentation was "how do we find bugs?", which led me to add links to the UDD bugs page and the release-critical bugs page. I also explained the key links at the top of the UDD page for finding specific sets of bugs, and explained the useful "patch" filter that allows selecting bugs with or without a patch.

I guess that maybe half of the people were able to learn something new or improve their skills enough to make significant contributions or test actual patches. Others learned how to hunt and triage bugs in the BTS.

Update: sorry for the wording: all contributions were really useful, thanks and apologies to bug hunters!!

I myself learned how to use sbuild thanks to the excellent sbuild wiki page, which I improved upon. A friend was able to pick up sbuild very quickly and use it to build a package for stretch, which I find encouraging: my first experience with pbuilder was definitely not as good. I have therefore started the process of switching my build chroots to sbuild, which didn't go so well on Jessie because I use a backported kernel and had to use the backported sbuild as well. That required a lot of poking around, so I ended up just using pbuilder for now, but I will definitely switch on my home machine, and I updated the sbuild wiki page to give more explanations on how to set up pbuilder.
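For anyone wanting to try the same switch, the basic sbuild bootstrap is short (a sketch; the chroot path, mirror and package name are illustrative):

sudo apt-get install sbuild
sudo sbuild-adduser $USER          # then log out and back in to pick up the group
sudo sbuild-createchroot stretch /srv/chroot/stretch-amd64-sbuild http://deb.debian.org/debian
sbuild -d stretch foo_1.0-1.dsc    # build the package inside the chroot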

We worked on a bunch of bugs and learned how to tag them as part of the BSP, which is documented on the BSP wiki page. It seems we worked on about 11 different bugs, which is a better average than at the last BSP I organized, so I'm pretty happy with that.

More importantly, we got Debian people together to meet and talk over delicious pizza, thanks to a sponsorship granted by the DPL. Some people got involved in the next DebConf, which is also great.

On top of fixing bugs and getting people involved in Debian, my third goal was to have fun, and fun we certainly had. I didn't work on as many bugs as I had expected, achieving only one upload in the end, but since I was answering so many questions left and right, I felt useful, and that is certainly gratifying. Organization was simple enough: just get a place, send invites and get food; the rest is just sharing knowledge and answering questions.

Thanks everyone for coming, and let's do this again soon!

16 April, 2017 07:19PM

Bits from Debian

DPL elections 2017, congratulations Chris Lamb!

The Debian Project Leader elections finished yesterday and the winner is Chris Lamb!

Of a total of 1062 developers, 322 developers voted using the Condorcet method.

More information about the result is available in the Debian Project Leader Elections 2017 page.

The current Debian Project Leader, Mehdi Dogguy, congratulated Chris Lamb in his Final bits from the (outgoing) DPL message. Thanks, Mehdi, for your service as DPL during these last twelve months!

The new term for the project leader starts on April 17th and expires on April 16th 2018.

16 April, 2017 04:40PM by Laura Arjona Reina

hackergotchi for Chris Lamb

Chris Lamb

Elected Debian Project Leader

I'd like to thank the entire Debian community for choosing me to represent them as the next Debian Project Leader.

I would also like to thank Mehdi for his tireless service and wish him all the best for the future. It is an honour to be elected as the DPL and I am humbled that you would place your faith and trust in me.

You can read my platform here.


16 April, 2017 12:52PM

April 15, 2017

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rcpp now used by 1000 CRAN packages

[Graph: CRAN packages using Rcpp over time]

Moments ago Rcpp passed a big milestone as there are now 1000 packages on CRAN depending on it (as measured by Depends, Imports and LinkingTo, but excluding Suggests). The graph on the left depicts the growth of Rcpp usage over time.

One easy way to compute such reverse dependency counts is the tools::dependsOnPkgs() function that was just mentioned in yesterday's R^4 blog post. Another way is to use the reverse_dependencies_with_maintainers() function from this helper scripts file on CRAN. Lastly, devtools has a function revdep(), but it has the wrong default parameters as it includes Suggests:, which you'd have to override to get the count I use here (it currently gets 1012 in this wider measure).

Rcpp cleared 300 packages in November 2014. It passed 400 packages in June 2015 (when I only tweeted about it), 500 packages in late October 2015, 600 packages last March, 700 packages last July, 800 packages last October and 900 packages early January. The chart extends to the very beginning via manually compiled data from CRANberries and checked with crandb. The next part uses manually saved entries. The core (and by far largest) part of the data set was generated semi-automatically via a short script appending updates to a small file-based backend. A list of packages using Rcpp is kept on this page.

Also displayed in the graph is the relative proportion of CRAN packages using Rcpp. The four per-cent hurdle was cleared just before useR! 2014, where I showed a similar graph (as two distinct graphs) in my invited talk. We passed five percent in December of 2014, six percent in July of 2015, seven percent just before Christmas 2015, eight percent last summer, and nine percent in mid-December 2016. Ten percent is next; we may get there during the summer.

1000 user packages is a really large number. This puts a whole lot of responsibility on us in the Rcpp team as we continue to keep Rcpp as performant and reliable as it has been.

And with that a very big Thank You! to all users and contributors of Rcpp for help, suggestions, bug reports, documentation or, of course, code.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

15 April, 2017 09:28PM

hackergotchi for Gunnar Wolf

Gunnar Wolf

On Dmitry Bogatov and empowering privacy-protecting tools

There is a thorny topic we have been discussing in nonpublic channels (say, the debian-private mailing list... It is impossible to call it a private list if it has close to a thousand subscribers, but it sometimes deals with sensitive material) for the last week. We finally have confirmation that we can bring this topic out into the open, and I expect several Debian people to talk about this. Besides, this information is now repeated all over the public Internet, so I'm not revealing anything sensitive. Oh, and there is a statement regarding Dmitry Bogatov published by the Tor project — but I'll get to Tor soon.

One week ago, the 25-year-old mathematician and Debian Maintainer Dmitry Bogatov was arrested, accused of organizing riots and calling for terrorist activities. Every piece of evidence so far points to the fact that Dmitry is not guilty of what he is charged with — he was filmed at different places at the times when the calls for terrorism happened.

It seems that Dmitry was arrested because he runs a Tor exit node. I don't know the current situation in Russia, nor his political leanings — But I do know what a Tor exit node looks like. I even had one at home for a short while.

What is Tor? It is a network overlay, meant for people to hide where they come from or who they are. Why? There are many reasons — Uninformed people will talk about the evil wrongdoers (starting the list of course with the drug sellers or child porn distributors). People who have taken their time to understand what this is about will rather talk about people for whom free speech is not a given; journalists, political activists, whistleblowers. And also, about regular people — Many among us have taken the habit of doing some of our Web surfing using Tor (probably via the very fine and interesting TAILS distribution — The Amnesiac Incognito Live System), just to increase the entropy, and just because we can, because we want to preserve the freedom to be anonymous before it's taken away from us.

There are many types of nodes in Tor; most of them are just regular users or bridges that forward traffic, helping Tor's anonymization. Exit nodes, where packets leave the Tor network and enter the regular Internet, are much scarcer — Partly because they can be quite problematic to people hosting them. But, yes, Tor needs more exit nodes, not just for bandwidth sake, but because the more exit nodes there are, the harder it is for a hostile third party to monitor a sizable number of them for activity (and break the anonymization).

I am coincidentally starting a project with a group of students of my Faculty (we want to breathe life again into LIDSOL - Laboratorio de Investigación y Desarrollo de Software Libre). As we are just starting, they are documenting some technical and social aspects of the need for privacy and how Tor works; I expect them to publish their findings in El Nigromante soon (which means... what? ☺ ), but definitively, part of what we want to do is to set up a Tor exit node at the university — well documented and with enough academic justification to avoid our network operation area ordering us to shut it down. Let's see what happens :)

Anyway, all in all — Dmitry is in for a hard time. He has been detained pre-trial at least until June, and he faces quite serious charges. He has done a lot of good, specialized work for the whole world to benefit from. So, given I cannot do more, I'm just speaking my mind here in this space.

[Update] Dmitry's case has been covered in LWN. There is also a statement concerning the arrest of Dmitry Bogatov by the Debian project. This case is also covered at The Register.

15 April, 2017 04:53AM by gwolf

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

#5: Easy package information

Welcome to the fifth post in the recklessly rambling R rants series, or R4 for short.

The third post showed an easy way to follow R development by monitoring (curated) changes on the NEWS file for the development version r-devel. As a concrete example, I mentioned that it has shown a nice new function (tools::CRAN_package_db()) coming up in R 3.4.0. Today we will build on that.

Consider the following short snippet:

library(data.table)

getPkgInfo <- function() {
    if (exists("CRAN_package_db", where = asNamespace("tools"))) {  # look the function up in the tools namespace
        dat <- tools::CRAN_package_db()
    } else {
        tf <- tempfile()
        download.file("https://cloud.r-project.org/src/contrib/PACKAGES.rds", tf, quiet=TRUE)
        dat <- readRDS(tf)              # r-devel can now readRDS off a URL too
    }
    dat <- as.data.frame(dat)
    setDT(dat)
    dat
}

It defines a simple function getPkgInfo() as a wrapper around said new function from R 3.4.0, i.e. tools::CRAN_package_db(), and a fallback alternative using a tempfile (in the automagically cleaned R temp directory) and an explicit download and read of the underlying RDS file. As an aside, just this week the r-devel NEWS told us that such readRDS() operations can now read directly from a URL connection. Very nice---as RDS is a fantastic file format when you are working in R.

Anyway, back to the RDS file! The snippet above returns a data.table object with as many rows as there are packages on CRAN, and basically all their (parsed!) DESCRIPTION info and then some. A gold mine!

Consider this to see how many packages have a dependency (in the sense of Depends, Imports or LinkingTo, but not Suggests, because Suggests != Depends) on Rcpp:

R> dat <- getPkgInfo()
R> rcppRevDepInd <- as.integer(tools::dependsOnPkgs("Rcpp", recursive=FALSE, installed=dat))
R> length(rcppRevDepInd)
[1] 998
R>

So exciting---we will hit 1000 within days! But let's do some more analysis:

R> dat[ rcppRevDepInd, RcppRevDep := TRUE]  # set to TRUE for given set
R> dat[ RcppRevDep==TRUE, 1:2]
           Package Version
  1:      ABCoptim  0.14.0
  2: AbsFilterGSEA     1.5
  3:           acc   1.3.3
  4: accelerometry   2.2.5
  5:      acebayes   1.3.4
 ---                      
994:        yakmoR   0.1.1
995:  yCrypticRNAs  0.99.2
996:         yuima   1.5.9
997:           zic     0.9
998:       ziphsmm   1.0.4
R>

Here we index the reverse dependencies using the vector we just computed, and then use that new variable to subset the data.table object. Given the aforementioned parsed information from all the DESCRIPTION files, we can learn more:

R> ## likely false entries
R> dat[ RcppRevDep==TRUE, ][NeedsCompilation!="yes", c(1:2,4)]
            Package Version                                                                         Depends
 1:         baitmet   1.0.0                                                           Rcpp, erah (>= 1.0.5)
 2:           bea.R   1.0.1                                                        R (>= 3.2.1), data.table
 3:            brms   1.6.0                     R (>= 3.2.0), Rcpp (>= 0.12.0), ggplot2 (>= 2.0.0), methods
 4: classifierplots   1.3.3                             R (>= 3.1), ggplot2 (>= 2.2), data.table (>= 1.10),
 5:           ctsem   2.3.1                                           R (>= 3.2.0), OpenMx (>= 2.3.0), Rcpp
 6:        DeLorean   1.2.4                                                  R (>= 3.0.2), Rcpp (>= 0.12.0)
 7:            erah   1.0.5                                                               R (>= 2.10), Rcpp
 8:             GxM     1.1                                                                              NA
 9:             hmi   0.6.3                                                                    R (>= 3.0.0)
10:        humarray     1.1 R (>= 3.2), NCmisc (>= 1.1.4), IRanges (>= 1.22.10),\nGenomicRanges (>= 1.16.4)
11:         iNextPD   0.3.2                                                                    R (>= 3.1.2)
12:          joinXL   1.0.1                                                                    R (>= 3.3.1)
13:            mafs   0.0.2                                                                              NA
14:            mlxR   3.1.0                                                           R (>= 3.0.1), ggplot2
15:    RmixmodCombi     1.0              R(>= 3.0.2), Rmixmod(>= 2.0.1), Rcpp(>= 0.8.0), methods,\ngraphics
16:             rrr   1.0.0                                                                    R (>= 3.2.0)
17:        UncerIn2     2.0                          R (>= 3.0.0), sp, RandomFields, automap, fields, gstat
R> 

There are a full seventeen packages which claim to depend on Rcpp while not having any compiled code of their own. That is likely false---but I keep them in my counts, however reluctantly. A CRAN-declared Depends: is a Depends:, after all.

Another nice thing to look at is the total number of packages that declare they need compilation:

R> ## number of packages with compiled code
R> dat[ , .(N=.N), by=NeedsCompilation]
   NeedsCompilation    N
1:               no 7625
2:              yes 2832
3:               No    1
R>

Isn't that awesome? It is 2832 out of (currently) 10458, or about 27.1%. Just over one in four. Now the 998 for Rcpp look even better as they are about 35% of all such packages. In other words, a little over one third of all packages with compiled code (which may be legacy C, Fortran or C++) use Rcpp. Wow.

Before closing, one shoutout to Dirk Schumacher, whose thankr package (which I made the center of the last post) is now on CRAN. A mighty fine and slim micropackage without external dependencies. Neat.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

15 April, 2017 12:56AM

April 14, 2017

hackergotchi for Laura Arjona Reina

Laura Arjona Reina

Underestimating Debian

I had two issues in the last few days that led me a bit into panic until they got solved. In both cases the issue was external to Debian, but I first thought that the problem was in Debian. I'm not sure why I had those thoughts; I should be more confident in myself, this awesome operating system, and the community around it! The good thing is that I'll be more confident from now on, and I've learned that hurry is not a good friend, and I should face my computer "problems" (and everything in life, probably) with a bit more patience (and backups).

Issue 1: Corrupt ext partition in a laptop

I have a laptop at home with dual boot Windows 7 + Debian 9 (Stretch). I rarely boot the Windows partition. When I do, I do whatever I need to do/test there, then install updates, and then shutdown the laptop or reboot in Debian to feel happy again when using computers.

Some months ago I noticed that booting into Debian was not possible, and I was left in an initramfs console suggesting that I run e2fsck /dev/sda6 (my Debian partition). I then ran e2fsck, answered "a" to fix all the issues found, and the system booted properly again. This issue was a bit scary-looking because the e2fsck output made the screen show random numbers, scrolling quickly for 1 or 2 minutes, until all the inodes or blocks or whatever were fixed.

I thought the disk might be faulty and ran badblocks, but faced the boot issue again some time after, and then decided to change the disk (I took the opportunity to make backups and install a fresh Debian 9 Stretch on the laptop, instead of the Debian 8 stable that was running).

The experience with Stretch has been great since then, but some days ago I faced the boot issue again. Then I realised that maybe the issue was appearing when I booted Debian right after using Windows (and this was why it was not appearing very often in my timeline 😉 ). I then paid more attention to the message that I was receiving in the console

Superblock checksum does not match superblock while trying to open /dev/sda6
 /dev/sda6:
 The superblock could not be read or does not describe a valid ext2/ext3/ext4
 filesystem. If the device is valid and it really contains an ext2/ext3/ext4
 filesystem (and not swap or ufs or something else), then the superblock
 is corrupt, and you might try running e2fsck with an alternate superblock:
 e2fsck -b 8193
 or
 e2fsck -b 32768

and searched about it, and also asked my friends in the redeslibres XMPP chat room about it 🙂

I found this question in the AskUbuntu forum that was exactly my issue (I had ext2fsd installed in Windows). My friends in the XMPP room yelled a friendly "booo!" at me for letting Windows touch my ext partitions (I apologised, it will never happen again!). I now could consistently reproduce the issue (boot Windows, then boot Debian, bang!: initramfs console, e2fsck, reboot Debian, no problem, boot Windows, boot Debian, again the problem, etc.). I uninstalled the ext2fsd program and tried to reproduce the issue, and I couldn't. So, happy end.
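As an aside, had the superblock really been damaged, the recovery path is exactly the one the initramfs message suggests, e.g. (with the partition from above):

e2fsck -b 32768 /dev/sda6    # retry the check using a backup superblock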

Issue 2: Accessing Android internal memory to backup files

The other issue was with my tablet running Android 4.0.4. It was facing a charging issue, and I wanted to back up the files there before sending it in for repair. I connected the tablet to my laptop via USB, and enabled USB debugging. The laptop recognized a MZ604 'camera' connected, but Dolphin (the file browser of my KDE Plasma desktop) could not show the files.

I looked at the settings on the tablet to try to find the one that would allow me to switch between camera/MTP when connecting via USB, but couldn't find it. I guessed that the tablet was correctly configured because I recall having made a backup some months ago with no hassle... (in Debian 8). I checked that my Debian (9) had the needed packages installed:

 ii kio-mtp 0.75+git20140304-2 amd64 access to MTP devices for applications using the KDE Platform
 ii libmtp-common 1.1.12-1 all Media Transfer Protocol (MTP) common files
 ii libmtp-runtime 1.1.12-1+b1 amd64 Media Transfer Protocol (MTP) runtime tools
 ii libmtp9:amd64 1.1.12-1+b1 amd64 Media Transfer Protocol (MTP) library

So I had no idea what was going on. Then I suspected some problem in my Debian (maybe I needed some driver for the Motorola tablet?) and booted Windows 7 to see what happened there.

Windows detected a MZ604 device too, but couldn't access the files either (when clicking on the device, no folders were shown). I began to search the internet to see if there were some Motorola drivers out there, and then found the clue to enable the correct setting on the Android device: you need to go to Settings > Storage, then press the 3-dots button that acts as the "Menu" function, and "USB computer connection" appears; there, you can enable Camera or MTP. A very hidden setting! I enabled MTP, and then I could see the folders and files on my Windows system (without needing to install any additional driver) and make my backup. And of course, after rebooting and trying in Debian, it worked too.
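For future reference, the MTP side can also be sanity-checked from Debian itself (a sketch; mtp-detect ships in the mtp-tools package, an extra install on top of the packages listed above):

sudo apt-get install mtp-tools
mtp-detect    # lists the connected MTP device and its storage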

Some outcomes/conclusions

  • I have a spare hard disk for backups, tests, whatever.
  • I should make backups more often (and organize my files too). Then I wouldn't be so nervous when facing connection or hard drive issues.
  • I won’t let Windows touch my Debian partitions again. I don’t say ext2fsd is bad, but I installed it “just in case” and in practice I never felt the need to use it. So there’s no need to risk (again) a corrupt ext partition.
  • Having a Windows system at hand is sometimes useful to demonstrate to myself (and maybe others) that the problems aren’t usually related to Debian or other GNU/Linux.
  • Having some more patience is useful too, to demonstrate to myself (and maybe others) that the problems aren’t usually related to Debian or other GNU/Linux.
  • Maybe I should put aside some money in my budget for collateral damage from my computer tinkering, or renew hardware at some point (before it definitely breaks, not after). For example, if I had renewed this tablet (it’s a good one, but from 2011, with Android 4, a broken screen, and it hadn’t been charging for a year; we were using it only plugged into AC), then my family wouldn’t care if I “break the old tablet” trying to unlock its bootloader or install Debian on it or whatever. The same goes for my husband’s laptop (the one with the dual boot); it’s an old machine already, but it’s his only computer. I already felt risky installing Debian testing on it! (I installed it at the end of January, right before the full freeze.)
  • OTOH, even thinking about renewing hardware gives me a headache. My family shows me advertisements from the shopping mall, and I don’t know if I can install Debian without nonfree blobs, or Replicant or LineageOS, on those devices. I don’t know the max volume that the ringtone reaches, or the max volume of the laptop speakers, or the lowest possible brightness of the screens. I’m picky about laptop keyboards. I don’t like to spend much money on hardware that can be destroyed easily because it falls from my hand to the floor, or I accidentally spill coffee on it. So I end up extending the life of my current hardware, even if I don’t like it much, either…

Comments?

You can comment on this post using this pump.io thread.


Filed under: My experiences and opinion Tagged: Android, Debian, English, KDE, Libre software for Windows

14 April, 2017 08:19PM by larjona

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 0.7.800.2.0

armadillo image

A new RcppArmadillo version 0.7.800.2.0 is now on CRAN.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to that of Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 318 other packages on CRAN -- an increase of 20 just since the last CRAN release of 0.7.600.1.0 in December!

Changes in this release relative to the previous CRAN release are as follows:

Changes in RcppArmadillo version 0.7.800.2.0 (2017-04-12)

  • Upgraded to Armadillo release 7.800.2 (Rogue State Redux)

    • The Armadillo license changed to Apache License 2.0
  • The DESCRIPTION file now mentions the Apache License 2.0, as well as the former MPL2 license used for earlier releases.

  • A new file init.c was added with calls to R_registerRoutines() and R_useDynamicSymbols()

  • Symbol registration is enabled in useDynLib

  • The fastLm example was updated

Courtesy of CRANberries, there is a diffstat report. More detailed information is on the RcppArmadillo page. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

14 April, 2017 02:14AM

April 13, 2017

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, March 2017

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In March, about 190 work hours have been dispatched among 14 paid contributors. Their reports are available:

  • Antoine Beaupré did 19 hours (out of 14.75h allocated + 10 remaining hours, thus keeping 5.75 extra hours for April).
  • Balint Reczey did nothing (out of 14.75 hours allocated + 2.5 hours remaining) and gave back all his unused hours. He took on a new job and will stop his work as LTS paid contributor.
  • Ben Hutchings did 14.75 hours.
  • Brian May did 10 hours.
  • Chris Lamb did 14.75 hours.
  • Emilio Pozuelo Monfort did 11.75 hours (out of 14.75 hours allocated + 0.5 hours remaining, thus keeping 3.5 hours for April).
  • Guido Günther did 4 hours (out of 8 hours allocated, thus keeping 4 extra hours for April).
  • Hugo Lefeuvre did 4 hours (out of 13.5 hours allocated, thus keeping 9.5 extra hours for April).
  • Jonas Meurer did 11.25 hours (out of 14.75 hours allocated, thus keeping 3.5 extra hours for April).
  • Markus Koschany did 14.75 hours.
  • Ola Lundqvist did 23.75 hours (out of 14.75h allocated + 9 hours remaining).
  • Raphaël Hertzog did 15 hours (out of 10 hours allocated + 6.25 hours remaining, thus keeping 1.25 hours for April).
  • Roberto C. Sanchez did 21.5 hours (out of 14.75 hours allocated + 7.75 hours remaining, thus keeping 1 extra hour for April).
  • Thorsten Alteholz did 14.75 hours.

Evolution of the situation

The number of sponsored hours has been unchanged but will likely decrease slightly next month as one sponsor will not renew their support (they have switched to CentOS).

The security tracker currently lists 52 packages with a known CVE and the dla-needed.txt file lists 40. The number of open issues continued its slight increase… not worrisome yet, but we need to keep an eye on this situation.

Thanks to our sponsors

New sponsors are in bold.


13 April, 2017 04:05PM by Raphaël Hertzog

hackergotchi for Francois Marier

Francois Marier

Automatically renewing Let's Encrypt TLS certificates on Debian using Certbot

I use Let's Encrypt TLS certificates on my Debian servers along with the Certbot tool. Since I use the "temporary webserver" method of proving domain ownership via the ACME protocol, I cannot use the cert renewal cronjob built into Certbot.

Instead, this is the script I put in /etc/cron.daily/certbot-renew:

#!/bin/bash

/usr/bin/certbot renew --quiet --pre-hook "/bin/systemctl stop apache2.service" --post-hook "/bin/systemctl start apache2.service"

pushd /etc/ > /dev/null
/usr/bin/git add letsencrypt
DIFFSTAT="$(/usr/bin/git diff --cached --stat)"
if [ -n "$DIFFSTAT" ] ; then
    /usr/bin/git commit --quiet -m "Renewed letsencrypt certs"
    echo "$DIFFSTAT"
fi
popd > /dev/null

It temporarily stops my Apache webserver while it renews the certificates, and only outputs something to STDOUT if certs have been renewed (since my cronjob will email me any output).

Since I'm using etckeeper to keep track of config changes on my servers, my renewal script also commits to the repository if any certs have changed.

External Monitoring

In order to catch mistakes or oversights, I use ssl-cert-check to monitor my domains once a day:

ssl-cert-check -s fmarier.org -p 443 -q -a -e francois@fmarier.org
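For reference, wiring that into cron could look like the following (a sketch only; the schedule and path are assumptions, the flags are the ones above):

# system crontab entry: check the certificate every morning at 06:15
15 6 * * * root /usr/bin/ssl-cert-check -s fmarier.org -p 443 -q -a -e francois@fmarier.org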

I also signed up with Cert Spotter which watches the Certificate Transparency log and notifies me of any newly-issued certificates for my domains.

In other words, I get notified:

  • if my cronjob fails and a cert is about to expire, or
  • as soon as a new cert is issued.

The whole thing seems to work well, but if there's anything I could be doing better, feel free to leave a comment!

13 April, 2017 03:00PM

hackergotchi for Michal Čihař

Michal Čihař

Weblate 2.13.1

Weblate 2.13.1 has been released quickly after 2.13. It fixes a few minor issues and a possible upgrade problem.

Full list of changes:

  • Fixed listing of managed projects in profile.
  • Fixed migration issue where some permissions were missing.
  • Fixed listing of current file format in translation download.
  • Return HTTP 404 when trying to access project where user lacks privileges.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org, the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user. Weblate is also being used on https://hosted.weblate.org/ as an official translating service for phpMyAdmin, OsmAnd, Aptoide, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations, thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence this by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

13 April, 2017 04:00AM

April 12, 2017

hackergotchi for Vincent Bernat

Vincent Bernat

Proper isolation of a Linux bridge

TL;DR: when configuring a Linux bridge, use the following commands to enforce isolation:

# bridge vlan del dev br0 vid 1 self
# echo 1 > /sys/class/net/br0/bridge/vlan_filtering

A network bridge (also commonly called a “switch”) brings several Ethernet segments together. It is a common element in most infrastructures. Linux provides its own implementation.

A typical use of a Linux bridge is shown below. The hypervisor is running three virtual hosts. Each virtual host is attached to the br0 bridge (represented by the horizontal segment). The hypervisor has two physical network interfaces:

  • eth0 is attached to a public network providing various services for the virtual hosts (DHCP, DNS, NTP, routers to Internet, …). It is also part of the br0 bridge.
  • eth1 is attached to an infrastructure network providing various services to the hypervisor (DNS, NTP, configuration management, routers to Internet, …). It is not part of the br0 bridge.

Typical use of Linux bridging with virtual machines

The main expectation of such a setup is that while the virtual hosts should be able to use resources from the public network, they should not be able to access resources from the infrastructure network (including resources hosted on the hypervisor itself, like an SSH server). In other words, we expect total isolation between the green domain and the purple one.

That’s not the case. From any virtual host:

# ip route add 192.168.14.3/32 dev eth0
# ping -c 3 192.168.14.3
PING 192.168.14.3 (192.168.14.3) 56(84) bytes of data.
64 bytes from 192.168.14.3: icmp_seq=1 ttl=59 time=0.644 ms
64 bytes from 192.168.14.3: icmp_seq=2 ttl=59 time=0.829 ms
64 bytes from 192.168.14.3: icmp_seq=3 ttl=59 time=0.894 ms

--- 192.168.14.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2033ms
rtt min/avg/max/mdev = 0.644/0.789/0.894/0.105 ms

Why?

There are two main factors behind this behavior:

  1. A bridge can accept IP traffic. This is a useful feature if you want Linux to act as a bridge and provide some IP services to bridge users (a DHCP relay or a default gateway). This is usually done by configuring the IP address on the bridge device: ip addr add 192.0.2.2/25 dev br0.
  2. An interface doesn’t need an IP address to process incoming IP traffic. Additionally, by default, Linux answers ARP requests independently of the incoming interface.

Bridge processing

After turning an incoming Ethernet frame into a socket buffer, the network driver transfers the buffer to the netif_receive_skb() function. The following actions are executed:

  1. copy the frame to any registered global or per-device taps (e.g. tcpdump),
  2. evaluate the ingress policy (configured with tc),
  3. hand over the frame to the device-specific receive handler, if any,
  4. hand over the frame to a global or device-specific protocol handler (e.g. IPv4, ARP, IPv6).

For a bridged interface, the kernel has configured a device-specific receive handler, br_handle_frame(). This function won’t allow any additional processing in the context of the incoming interface, except for STP and LLDP frames or if “brouting” is enabled [1]. Therefore, the protocol handlers are never executed in this case.

After a few additional checks, Linux will decide if the frame has to be locally delivered:

  • the entry for the target MAC in the FDB is marked for local delivery, or
  • the target MAC is a broadcast or a multicast address.

In this case, the frame is passed to the br_pass_frame_up() function. A VLAN-related check is optionally performed. The socket buffer is attached to the bridge interface (br0) instead of the physical interface (eth0), is evaluated by Netfilter and sent back to netif_receive_skb(). It will go through the four steps a second time.

IPv4 processing

When a device doesn’t have a protocol-independent receive handler, a protocol-specific handler will be used:

# cat /proc/net/ptype
Type Device      Function
0800          ip_rcv
0011          llc_rcv [llc]
0004          llc_rcv [llc]
0806          arp_rcv
86dd          ipv6_rcv

Therefore, if the Ethernet type of the incoming frame is 0x800, the socket buffer is handled by ip_rcv(). Among other things, the three following steps will happen:

  • If the frame destination address is not the MAC address of the incoming interface, not a multicast one and not a broadcast one, the frame is dropped (“not for us”).
  • Netfilter gets a chance to evaluate the packet (in a PREROUTING chain).
  • The routing subsystem will decide the destination of the packet in ip_route_input_slow(): is it a local packet, should it be forwarded, should it be dropped, should it be encapsulated? Notably, the reverse-path filtering is done during this evaluation in fib_validate_source().

Reverse-path filtering (also known as uRPF, or unicast reverse-path forwarding, RFC 3704) enables Linux to reject traffic on interfaces which it should never have originated: the source address is looked up in the routing tables and if the outgoing interface is different from the current incoming one, the packet is rejected.

ARP processing

When the Ethernet type of the incoming frame is 0x806, the socket buffer is handled by arp_rcv().

  • Like for IPv4, if the frame is not for us, it is dropped.
  • If the incoming device has the NOARP flag, the frame is dropped.
  • Netfilter gets a chance to evaluate the packet (configuration is done with arptables).
  • For an ARP request, the values of arp_ignore and arp_filter may trigger a drop of the packet.

IPv6 processing

When the Ethernet type of the incoming frame is 0x86dd, the socket buffer is handled by ipv6_rcv().

  • Like for IPv4, if the frame is not for us, it is dropped.
  • If IPv6 is disabled on the interface, the packet is dropped.
  • Netfilter gets a chance to evaluate the packet (in a PREROUTING chain).
  • The routing subsystem will decide the destination of the packet. However, unlike IPv4, there is no reverse-path filtering [2].

Workarounds

There are various methods to fix the situation.

We can completely ignore the bridged interfaces: as long as they are attached to the bridge, they cannot process any upper layer protocol (IPv4, IPv6, ARP). Therefore, we can focus on filtering incoming traffic from br0.

It should be noted that for IPv4, IPv6 and ARP protocols, the MAC address check can be circumvented by using the broadcast MAC address.

Protocol-independent workarounds

The four following fixes will indistinctly drop IPv4, ARP and IPv6 packets.

Using VLAN-aware bridge

Linux 3.9 introduced the ability to use VLAN filtering on bridge ports. This can be used to prevent any local traffic:

# echo 1 > /sys/class/net/br0/bridge/vlan_filtering
# bridge vlan del dev br0 vid 1 self
# bridge vlan show
port    vlan ids
eth0     1 PVID Egress Untagged
eth2     1 PVID Egress Untagged
eth3     1 PVID Egress Untagged
eth4     1 PVID Egress Untagged
br0     None

This is the most efficient method since the frame is dropped directly in br_pass_frame_up().

Using ingress policy

It’s also possible to drop the bridged frame early after it has been re-delivered to netif_receive_skb() by br_pass_frame_up(). The ingress policy of an interface is evaluated before any handler. Therefore, the following commands will ensure no local delivery (the source interface of the packet is the bridge interface) happens:

# tc qdisc add dev br0 handle ffff: ingress
# tc filter add dev br0 parent ffff: u32 match u8 0 0 action drop

In my opinion, this is the second most efficient method.

Using ebtables

Just before re-delivering the frame to netif_receive_skb(), Netfilter gets a chance to issue a decision. It’s easy to configure it to drop the frame:

# ebtables -A INPUT --logical-in br0 -j DROP

However, to the best of my knowledge, this part of Netfilter is known to be inefficient.

Using namespaces

Isolation can also be obtained by moving all the bridged interfaces into a dedicated network namespace and configure the bridge inside this namespace:

# ip netns add bridge0
# ip link set netns bridge0 eth0
# ip link set netns bridge0 eth2
# ip link set netns bridge0 eth3
# ip link set netns bridge0 eth4
# ip link del dev br0
# ip netns exec bridge0 brctl addbr br0
# for i in 0 2 3 4; do
>    ip netns exec bridge0 brctl addif br0 eth$i
>    ip netns exec bridge0 ip link set up dev eth$i
> done
# ip netns exec bridge0 ip link set up dev br0

The frame will still wander a bit inside the IP stack, wasting some CPU cycles and increasing the possible attack surface. But ultimately, it will be dropped.

Protocol-dependent workarounds

Unless you require multiple layers of security, if one of the previous workarounds is already applied, there is no need to apply one of the protocol-dependent fixes below. It’s still worth knowing them, because it is not uncommon to already have them in place.

ARP

The easiest way to disable ARP processing on a bridge is to set the NOARP flag on the device. The ARP packet will be dropped as the very first step of the ARP handler.

# ip link set arp off dev br0
# ip l l dev br0
8: br0: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 50:54:33:00:00:04 brd ff:ff:ff:ff:ff:ff

arptables can also drop the packet quite early:

# arptables -A INPUT -i br0 -j DROP

Another way is to set arp_ignore to 2 for the given interface. The kernel will only answer to ARP requests whose target IP address is configured on the incoming interface. Since the bridge interface doesn’t have any IP address, no ARP requests will be answered.

# sysctl -qw net.ipv4.conf.br0.arp_ignore=2

Disabling ARP processing is not a sufficient workaround for IPv4. A user can still insert the appropriate entry into their neighbor cache:

# ip neigh replace 192.168.14.3 lladdr 50:54:33:00:00:04 dev eth0
# ping -c 1 192.168.14.3
PING 192.168.14.3 (192.168.14.3) 56(84) bytes of data.
64 bytes from 192.168.14.3: icmp_seq=1 ttl=49 time=1.30 ms

--- 192.168.14.3 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.309/1.309/1.309/0.000 ms

As the check on the target MAC address is quite loose, they don’t even need to guess the MAC address:

# ip neigh replace 192.168.14.3 lladdr ff:ff:ff:ff:ff:ff dev eth0
# ping -c 1 192.168.14.3
PING 192.168.14.3 (192.168.14.3) 56(84) bytes of data.
64 bytes from 192.168.14.3: icmp_seq=1 ttl=49 time=1.12 ms

--- 192.168.14.3 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.129/1.129/1.129/0.000 ms

IPv4

The earliest place to drop an IPv4 packet is with Netfilter [3]:

# iptables -t raw -I PREROUTING -i br0 -j DROP

If Netfilter is disabled, another possibility is to enable strict reverse-path filtering for the interface. In this case, since there is no IP address configured on the interface, the packet will be dropped during the route lookup:

# sysctl -qw net.ipv4.conf.br0.rp_filter=1

Another option is the use of a dedicated routing rule. Compared to the reverse-path filtering option, the packet will be dropped a bit earlier, still during the route lookup.

# ip rule add iif br0 blackhole

IPv6

Linux provides a way to completely disable IPv6 on a given interface. The packet will be dropped as the very first step of the IPv6 handler:

# sysctl -qw net.ipv6.conf.br0.disable_ipv6=1

Like for IPv4, it’s possible to use Netfilter or a dedicated routing rule.
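
As a sketch, the IPv6 equivalents would mirror the IPv4 commands shown above (adapt the interface name to your setup):

# ip6tables -t raw -I PREROUTING -i br0 -j DROP
# ip -6 rule add iif br0 blackhole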

About the example

In the above example, the virtual hosts get ICMP replies because the packets are routed through the infrastructure network to the Internet (e.g. the hypervisor has a default gateway which also acts as a NAT router to the Internet). This may not be the case in your setup.

If you want to check if you are “vulnerable” despite not getting an ICMP reply, look at the guest neighbor table to check if you got an ARP reply from the host:

# ip route add 192.168.14.3/32 dev eth0
# ip neigh show dev eth0
192.168.14.3 lladdr 50:54:33:00:00:04 REACHABLE

If you didn’t get a reply, you could still have issues with IP processing. Add a static neighbor entry before checking the next step:

# ip neigh replace 192.168.14.3 lladdr ff:ff:ff:ff:ff:ff dev eth0

To check if IP processing is enabled, check the bridge host’s network statistics:

# netstat -s | grep "ICMP messages"
    15 ICMP messages received
    15 ICMP messages sent
    0 ICMP messages failed

If the counters are increasing, it is processing incoming IP packets.

One-way communication still allows a lot of bad things, like DoS attacks. Additionally, if the hypervisor happens to also act as a router, the reach is extended to the whole infrastructure network, potentially exposing weak devices (e.g. a PDU exposing an SNMP agent). If one-way communication is all that’s needed, the attacker can also spoof their source IP address, bypassing IP-based authentication.


  1. A frame can be forcibly routed (L3) instead of bridged (L2) by “brouting” the packet. This action can be triggered using ebtables.

  2. For IPv6, reverse-path filtering needs to be implemented with Netfilter, using the rpfilter match.

  3. If the br_netfilter module is loaded, the net.bridge.bridge-nf-call-iptables sysctl has to be set to 0. Otherwise, you also need to use the physdev match to not drop IPv4 packets going through the bridge. 

12 April, 2017 07:58AM by Vincent Bernat

hackergotchi for Daniel Pocock

Daniel Pocock

What is the risk of using proprietary software for people who prefer not to?

Jonas Öberg has recently blogged about Using Proprietary Software for Freedom. He argues that it can be acceptable to use proprietary software to further free and open source software ambitions if that is indeed the purpose. Jonas' blog suggests that each time proprietary software is used, the relative risk and reward should be considered and there may be situations where the reward is big enough and the risk low enough that proprietary software can be used.

A question of leadership

Many of the free software users and developers I've spoken to express frustration about how difficult it is to communicate to their family and friends about the risks of proprietary software. A typical example is explaining to family members why you would never install Skype.

Imagine a doctor who gives a talk to school children about the dangers of smoking and is then spotted having a fag at the bus stop. After a month, if you ask the children what they remember about that doctor, is it more likely to be what he said or what he did?

When contemplating Jonas' words, it is important to consider this leadership factor as a significant risk every time proprietary software or services are used. Getting busted with just one piece of proprietary software undermines your own credibility and posture now and well into the future.

Research has shown that when communicating with people, what they see and how you communicate accounts for ninety-three percent of the impression you make; what you actually say to them is only seven percent. When giving a talk at a conference or a demo to a client, or communicating with family members in our everyday lives, using a proprietary application, or a product or service that is obviously proprietary like an iPhone or Facebook, will have far more impact than the words you say.

It is not only a question of what you are seen doing in public: somebody who lives happily and comfortably without using proprietary software sounds a lot more credible than somebody who tries to explain freedom without living it.

The many faces of proprietary software

One of the first things to consider is that even for those developers who have a completely free operating system, there may well be some proprietary code lurking in their BIOS or other parts of their hardware. Their mobile phone, their car, their oven and even their alarm clock are all likely to contain some proprietary code too. The risks associated with these technologies may well be quite minimal, at least until that alarm clock becomes part of the Internet of Things and can be hacked by the bored teenager next door. Accessing most web sites these days inevitably involves some interaction with proprietary software, even if it is not running on your own computer.

There is no need to give up

Some people may consider this state of affairs and simply give up, using whatever appears to be the easiest solution for each problem at hand without thinking too much about whether it is proprietary or not.

I don't think Jonas' blog intended to sanction this level of complacency. Every time you come across a piece of software, it is worth considering whether a free alternative exists and whether the software is really needed at all.

An orderly migration to free software

In our professional context, most software developers come across proprietary software every day in the networks operated by our employers and their clients. Sometimes we have the opportunity to influence the future of these systems. There are many cases where telling the client to go cold-turkey on their proprietary software would simply lead to the client choosing to get advice from somebody else. The free software engineer who looks at the situation strategically may find that it is possible to continue using the proprietary software as part of a staged migration, gradually helping the user to reduce their exposure over a period of months or even a few years. This may be one of the scenarios where Jonas is sanctioning the use of proprietary software.

On a technical level, it may be possible to show the client that we are concerned about the dangers but that we also want to ensure the continuity of their business. We may propose a solution that involves sandboxing the proprietary software in a virtual machine or a DMZ to prevent it from compromising other systems or "calling home" to the vendor.

As well as technical concerns about a sudden migration, promoters of free software frequently encounter political issues as well. For example, the IT manager in a company may be five years from retirement and not concerned about his employer's long-term ability to extricate itself from a web of Microsoft licenses once he or she has the freedom to go fishing every day. The free software professional may need to invest significant time winning the trust of senior management before being able to work around a belligerent IT manager like this.

No deal is better than a bad deal

People in the UK have probably encountered the expression "No deal is better than a bad deal" many times already in the last few weeks. Please excuse me for borrowing it. If there is no free software alternative to a particular piece of proprietary software, maybe it is better to simply do without it. Facebook is a great example of this principle: life without social media is great and rather than trying to find or create a free alternative, why not just do something in the real world, like riding motorcycles, reading books or getting a cat or dog?

Burning bridges behind you

For those who are keen to be the visionaries and leaders in a world where free software is the dominant paradigm, would you really feel satisfied if you got there on the back of proprietary solutions? Or are you concerned that taking such shortcuts is only going to put that vision further out of reach?

Each time you solve a problem with free software, whether it is small or large, in your personal life or in your business, the process you went through strengthens you to solve bigger problems the same way. Each time you solve a problem using a proprietary solution, not only do you miss out on that process of discovery but you also risk conditioning yourself to be dependent in future.

For those who hope to build a successful startup company or be part of one, how would you feel if you reach your goal and then the rug is pulled out from under you when a proprietary software vendor or cloud service you depend on changes the rules?

Personally, in my own life, I prefer to avoid and weed out proprietary solutions wherever I can and force myself to either make free solutions work or do without them. Using proprietary software and services is living your life like a rat in a maze, where the oligarchs in Silicon Valley can move the walls around as they see fit.

12 April, 2017 06:43AM by Daniel.Pocock

hackergotchi for Michal Čihař

Michal Čihař

Weblate 2.13

Weblate 2.13 has been released today pretty much on the schedule. The most important change being more fine grained access control and some smaller UI improvements. There are other new features and bug fixes as well.

Full list of changes:

  • Fixed quality checks on translation templates.
  • Added quality check to trigger on losing translation.
  • Add option to view pending suggestions from user.
  • Add option to automatically build component lists.
  • Default dashboard for unauthenticated users can be configured.
  • Add option to browse 25 random strings for review.
  • History now indicates string change.
  • Better error reporting when adding new translation.
  • Added per language search within project.
  • Group ACLs can now be limited to certain permissions.
  • The per-project ACLs are now implemented using Group ACL.
  • Added more fine grained privileges control.
  • Various minor UI improvements.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org, the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user. Weblate is also being used on https://hosted.weblate.org/ as an official translating service for phpMyAdmin, OsmAnd, Aptoide, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations, thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence this by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English Gammu phpMyAdmin SUSE Weblate

12 April, 2017 06:30AM

April 11, 2017

hackergotchi for Joey Hess

Joey Hess

starting debug-me and a new devblog

I've started building debug-me. It's my birthday, and building a new program is kind of my birthday gift to myself, because I love starting a new program and seeing where it goes. (Also, my Patreon backers wanted me to get on with building debug-me.)

I also have a new devblog! Up until now, I've had a devblog that only covered work on git-annex. That one continues, but the new devblog is for development journaling for any project I'm working on. http://joeyh.name/devblog/

11 April, 2017 11:26PM

hackergotchi for Matthew Garrett

Matthew Garrett

Disabling SSL validation in binary apps

Reverse engineering protocols is a great deal easier when they're not encrypted. Thankfully most apps I've dealt with have been doing something convenient like using AES with a key embedded in the app, but others use remote protocols over HTTPS and that makes things much less straightforward. MITMProxy will solve this, as long as you're able to get the app to trust its certificate, but if there's a built-in pinned certificate that's going to be a pain. So, given an app written in C running on an embedded device, and without an easy way to inject new certificates into that device, what do you do?

First: The app is probably using libcurl, because it's free, works and is under a license that allows you to link it into proprietary apps. This is also bad news, because libcurl defaults to having sensible security settings. In the worst case we've got a statically linked binary with all the symbols stripped out, so we're left with the problem of (a) finding the relevant code and (b) replacing it with modified code. Fortunately, this is much less difficult than you might imagine.

First, let's find where curl sets up its defaults. Curl_init_userdefined() in curl/lib/url.c has the following code:
set->ssl.primary.verifypeer = TRUE;
set->ssl.primary.verifyhost = TRUE;
#ifdef USE_TLS_SRP
set->ssl.authtype = CURL_TLSAUTH_NONE;
#endif
set->ssh_auth_types = CURLSSH_AUTH_DEFAULT; /* defaults to any auth
type */
set->general_ssl.sessionid = TRUE; /* session ID caching enabled by
default */
set->proxy_ssl = set->ssl;

set->new_file_perms = 0644; /* Default permissions */
set->new_directory_perms = 0755; /* Default permissions */

TRUE is defined as 1, so we want to change the code that currently sets verifypeer and verifyhost to 1 to instead set them to 0. How to find it? Look further down - new_file_perms is set to 0644 and new_directory_perms is set to 0755. The leading 0 indicates octal, so these correspond to decimal 420 and 493. Passing the file to objdump -d (assuming a build of objdump that supports this architecture) will give us a disassembled version of the code, so time to fix our problems with grep:
objdump -d target | grep --after=20 ,420 | grep ,493

This gives us the disassembly of target, searches for any occurrence of ",420" (indicating that 420 is being used as an argument in an instruction), prints the following 20 lines and then searches for a reference to 493. It spits out a single hit:
43e864: 240301ed li v1,493
Which is promising. Looking at the surrounding code gives:
43e820: 24030001 li v1,1
43e824: a0430138 sb v1,312(v0)
43e828: 8fc20018 lw v0,24(s8)
43e82c: 24030001 li v1,1
43e830: a0430139 sb v1,313(v0)
43e834: 8fc20018 lw v0,24(s8)
43e838: ac400170 sw zero,368(v0)
43e83c: 8fc20018 lw v0,24(s8)
43e840: 2403ffff li v1,-1
43e844: ac4301dc sw v1,476(v0)
43e848: 8fc20018 lw v0,24(s8)
43e84c: 24030001 li v1,1
43e850: a0430164 sb v1,356(v0)
43e854: 8fc20018 lw v0,24(s8)
43e858: 240301a4 li v1,420
43e85c: ac4301e4 sw v1,484(v0)
43e860: 8fc20018 lw v0,24(s8)
43e864: 240301ed li v1,493
43e868: ac4301e8 sw v1,488(v0)

Towards the end we can see 493 being loaded into v1, and v1 then being copied into an offset from v0. This looks like a structure member being set to 493, which is what we expected. Above that we see the same thing being done to 420. Further up we have some more stuff being set, including a -1 - that corresponds to CURLSSH_AUTH_DEFAULT, so we seem to be in the right place. There's a zero above that, which corresponds to CURL_TLSAUTH_NONE. That means that the two 1 operations above the -1 are the code we want, and simply changing the instruction words at 43e820 and 43e82c from 24030001 to 24030000 means that our targets will be set to 0 (ie, FALSE) rather than 1 (ie, TRUE). Copy the modified binary back to the device, run it and now it happily talks to MITMProxy. Huge success.
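To make the patching step concrete, something like the following could flip those two words - a sketch only, assuming a big-endian MIPS binary whose file offsets happen to match the addresses objdump printed (for a real ELF you would normally apply the section's load-address to file-offset delta first):

# li v1,1 (0x24030001) becomes li v1,0 (0x24030000) at both sites
printf '\x24\x03\x00\x00' | dd of=target bs=1 seek=$((0x43e820)) conv=notrunc
printf '\x24\x03\x00\x00' | dd of=target bs=1 seek=$((0x43e82c)) conv=notrunc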

(If the app calls Curl_setopt() to reconfigure the state of these values, you'll need to stub those out as well - thankfully, recent versions of curl include a convenient string "CURLOPT_SSL_VERIFYHOST no longer supports 1 as value!" in this function, so if the code in question is using semi-recent curl it's easy to find. Then it's just a matter of looking for the constants that CURLOPT_SSL_VERIFYHOST and CURLOPT_SSL_VERIFYPEER are set to, following the jumps and hacking the code to always set them to 0 regardless of the argument)


11 April, 2017 10:27PM

hackergotchi for Sean Whitton

Sean Whitton

coffeereddit

Most people I know can handle a single coffee per day, sometimes even forgetting to drink it. I never could understand how they did it. Talking about this with a therapist I realised that the problem isn’t necessarily the caffeine, it’s my low tolerance of less than razor-sharp focus. Most people accept they have slumps in their focus and just work through them. binarybear on reddit

11 April, 2017 09:08PM

hackergotchi for Riku Voipio

Riku Voipio

Deploying OBS

Open Build Service from SuSE is a web service for building deb/rpm packages. It has recently been added to Debian, so finally there is a relatively easy way to set up PPA-style repositories in Debian. Relative as in "there is a learning curve, but nowhere near the complexity of replicating Debian's internal infrastructure". OBS will give you both repositories and build infrastructure, with a clickety web UI and a command line client (osc) to manage them. See Hector's blog for quickstart instructions.
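To give a flavour of the workflow, osc drives most of the day-to-day work. A rough sketch (the project, package and repository names below are placeholders, not from a real setup):

osc checkout home:myuser mypackage   # fetch a package working copy
cd home:myuser/mypackage
osc build Debian_9.0 x86_64          # local test build against one repo target
osc commit -m "update package"       # upload sources, triggering server-side builds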

Things learned while setting up OBS

With me coming from a Debian background, and OBS coming from the SuSE/RPM world, there are some quirks that can take you by surprise.

Well done packaging

Usually web services are a tough fit for distros: a cascade of weird dependencies and build systems where the only practical way to build an "open source" web service is by replicating the upstream CI scripts. Not in the case of OBS. Being done by distro people shows.

OBS does automatic rebuilds of reverse dependencies

Aka automatic binNMUs when you update a library. This, however, means you need lots of build power around. OBS has its own dependency resolver on the server that recalculates what packages need rebuilding and when - workers just get a list of packages to install for build-depends. This is a major divergence from Debian, where sbuild handles dependencies client-side. The OBS dependency handler doesn't handle virtual packages* / alternative build-deps the way Debian does - you may have to add a specific "Prefer: foo-dev" to the OBS project config to resolve alternative choices.

OBS server and worker do http requests in both directions

On startup, workers connect to the OBS server, open a TCP port and wait for requests coming from OBS. Having connections in both directions is a bit of a hassle firewall-wise. On the bright side, there's no need to set up uploads via FTP here.

Signing repositories is complicated

With Debian 9.0 making signed repositories pretty much mandatory, OBS makes signing rather complicated. obs-signd isn't included in Debian, since it depends on a gnupg patch that hasn't been upstreamed. Fortunately I found a workaround: OBS signs release files with /usr/bin/sign -d /path/to/release, and replacing the obs-signd provided sign command with your own script is easy ;)
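To make that concrete, the replacement could take roughly this shape - a sketch only, since the exact sign(1) modes OBS invokes may vary between versions, and KEYID is a placeholder for whatever key should sign your repositories:

#!/bin/sh
# drop-in for /usr/bin/sign: OBS calls "sign -d <file>" and expects the
# detached armored signature on stdout
KEYID=ABCD1234   # placeholder repository signing key
case "$1" in
  -d) shift
      exec gpg --batch --no-tty -u "$KEYID" --detach-sign --armor -o - "$1"
      ;;
  *)  echo "sign: unsupported mode $1" >&2
      exit 1
      ;;
esac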

Git integration is rather bolted-on than integrated

OBS provides a method to integrate with git using services. There is no clickety UI to link to a git repo; instead you make an xml file called _service with osc. There is no way to have the debian/ tree in git.

The upstream community is friendly

Including the happiest thanks from an upstream I've seen recently.

Summary

All in all, I'm rather satisfied with OBS. If you have a home-grown jenkins-based (or similar) solution for building DEB/RPM packages, you should definitely consider OBS. For simpler uses, there's no need to install OBS yourself; the public openSUSE OBS will happily build Debian packages for you.

*How useful are virtual packages anymore? "foo-defaults" packages seem to be the go-to solution for most real use cases anyway.

11 April, 2017 08:23PM by Riku Voipio (noreply@blogger.com)

Antoine Beaupré

A report from Netconf: Day 2

This article covers the second day of the informal Netconf discussions, held on April 4, 2017. Topics discussed this day included the binding of sockets in VRF, identification of eBPF programs, inconsistencies between IPv4 and IPv6, changes to data-center hardware, and more. (See this article for coverage from the first day of discussions).

How to bind to specific sockets in VRF

One of the first presentations was from David Ahern of Cumulus, who presented a few interesting questions for the audience. His first was the problem of binding sockets to a given interface. Right now, there are four different ways this can be done:

  • the old SO_BINDTODEVICE generic socket option (see socket(7))
  • the IP_PKTINFO, IP-specific socket option (see ip(7)), introduced in Linux 2.2
  • the IP_UNICAST_IF flag, introduced in Linux 3.3 for WINE
  • the IPv6 scope ID suffix, part of the IPv6 addressing standard

So there's a problem of having too many ways of doing the same thing, something that cannot really be fixed without breaking ABI compatibility. But even worse, conflicts between those options are not reported by the kernel so it's possible for a user to set up socket flags in a way that certain flags override others and there are no checks made or errors reported. It was agreed that the user should get some notification of conflicting changes here, at least.

Furthermore, binding sockets to a specific VRF (Virtual Routing and Forwarding) device is not currently possible, so Ahern asked what the best way to do this would be, considering the many options available. A use case example is a UDP multicast socket that could be bound to a specific interface within a VRF.

This is an old problem: Tom Herbert explained that there were previous discussions about making the bind() system call more programmable so that, for example, you could bind() a UDP socket to a discrete list of IP addresses or a subnet. So he identified this issue as a broader problem that should be addressed by making the interfaces more generic.

Ahern explained that it is currently possible to bind sockets to the slave device of a VRF even though that should not be allowed. He also raised the question of how the kernel should tell which socket should be selected for incoming packets. Right now, there is a scoring mechanism for UDP sockets, but that cannot be used directly in this more general case.

David Miller said that there are already different ways of specifying scope: there is the VRF layer and the namespace ("netns") layer. A long time ago, Miller reluctantly accepted the addition of netns keys everywhere, swallowing the performance cost to gain flexibility. He argued that a new key should not be added and instead existing infrastructure should be reused. Herbert argued this was exactly the reason why this should be simplified: "if we don't answer the question, people will keep on trying this". For example, one can use a VRF to limit listening addresses, but it gets complicated if we need a device for every address. It seems the consensus evolved towards using IP_UNICAST_IF, added back in 2012, which is accessible for non-root users. It is currently limited to UDP and RAW sockets, but it could be extended for TCP.

XDP and eBPF program identification

Ahern then turned to the problem of extracting BPF programs from the kernel. He gave the example of a simple cBPF (classic BPF) filter that checks for ARP packets. If the filter is read back from the kernel, the user gets a blob of binary data, which is hard to interpret. There is a kernel verifier that can show C-like output, but that is also difficult to interpret. Ahern then added annotations to his slide that showed what the original program actually does, which was a good demonstration of why such a feature is needed.

Ahern explained that, at least for cBPF, it should be possible to recover the original plaintext, or at least something close to the original program. A first step would be to replace known constants (like 0x806 for ARP). Even with eBPF, it should be possible to improve the output. Alexei Starovoitov, the BPF maintainer, explained that it might make sense to start by returning information about the maps used by an eBPF program. Then more complex data structures could be inspected once we know their type.

The first priority is to get simple debugging tools working but, in the long term, the goal is a full decompiler that can reconstruct instructions into a human-readable program. The question that remains is how to return this data. Ahern explained that right now the bpf() system call copies the data to a different file descriptor, but it could just fill in a buffer. Starovoitov argued for a file descriptor; that would allow the kernel to stream everything through the same descriptor instead of having many attach points. Netlink cannot be used for this because of its asynchronous nature.

A similar issue regarding the way we identify express data path (XDP) programs (which are also written in BPF) was raised by Daniel Borkmann from Covalent. Miller explained that users will want ways to figure out which XDP program was installed, so XDP needs an introspection mechanism. We currently have SHA-1 identifiers that can be internally used to tell which binary is currently loaded but those are not exposed to user space. Starovoitov mentioned it is now just a boolean that shows if a program is loaded or not.

A use case for this, on top of just trying to figure out which BPF program is loaded, is to actually fetch the source code of a BPF program that was deployed in the field for which the source was lost. It is still uncertain that it will be possible to extract an exact copy that could then be recompiled into the same program. Starovoitov added that he needed this in production to do proper reporting.

IPv4/IPv6 equivalency

The last issue — or set of issues — that Ahern brought up was the question of inconsistencies between IPv4 and IPv6. It turns out that, because both protocols were (naturally) implemented separately, there are inconsistencies in how they are handled in the Linux kernel, which affect, among other things, the VRF framework. The first example he gave was the fact that IPv6 addresses added on the loopback interface generate unreachable routes in the main routing table, yet this doesn't happen with IPv4 addresses. Hannes Frederic Sowa explained this was part of the IPv6 specification: there are stronger restrictions on loopback interfaces in IPv6 than IPv4. Ahern explained that VRF loopback interfaces do not implement these restrictions and wanted to know if this was a problem.

Another issue is that anycast routes are added to the wrong interface. This is apparently not specific to VRF: this was done "just because Java", and has been there from day one. It seems that the Java Virtual Machine builds its own routing table and assumes this behavior, so changing this would break every JVM out there, which is obviously not acceptable.

Finally, Martin Kafai Lau asked if work should be done to merge the IPv4 and IPv6 FIB (forwarding information base) trees. The FIB tree is the data structure that represents routing tables in the Linux kernel. Miller explained that the two trees are not semantically equivalent: while IPv6 does source-address lookup and routing, IPv4 does not. We can't remove the source lookups from IPv6, because "people probably use that". According to Alexander Duyck, adding source tables to IPv4 would degrade performance to the level of IPv6 performance, which was jokingly referred to as an incentive to switch to IPv6.

More seriously, Sowa argued that using the same compressed tree IPv4 uses in IPv6 could make sense. People may want to have source routing in IPv4 as well. Miller argued that the kernel is optimized for 32-bit addresses in IPv4, and conceded that it could be scaled to 64-bit subnets, but 128-bit addresses would be much harder. Sowa suggested that they could be limited to 64 bits, as global routes that are announced over BGP usually have such a limit, and more specific routes are usually at discrete prefixes like /65, /127 (for interconnect links) or /128 for (for point-to-point links). He expressed concerns over the reliability of such an implementation so, at this point, it is unlikely that the data structures could be merged. What is more likely is that the code path could be merged and simplified, while keeping the data structures separate.

Modules options substitutions

The next issue that was raised was from Jiří Pírko, who asked how to pass configuration options to a driver before the driver is initialized. Some chips require that some settings be sent before the firmware is loaded, which leads to a weird situation where there is a need to address a device before it's actually recognized by the kernel. The question then can be summarized as to how to pass information to a device that doesn't exist yet.

The answer seems to be that devlink could do this, as it has access to the full device tree and, therefore, to devices that can be addressed by (say) PCI identifiers. Then a possible devlink command could look something like:

    devlink dev pci/0000:03:00.0 option set foo bar

This idea raised a bunch of extra questions: some devices don't have a one-to-one mapping with the PCI bridge identifiers, for example, meaning that those identifiers cannot be used to access such devices. Another issue is that you may want to send multiple settings in a single transaction, which doesn't fit well in the devlink model. Miller then proposed to let the driver initialize itself to some state and wait for configuration to be sent when necessary. Another way would be to unregister the driver and re-register with the given configuration. Shrijeet Mukherjee explained that right now, Cumulus is doing this using horrible startup script magic by retrying and re-registering, but it would be nice to have a more standard way to do this.

Control over UAPI patches

Another issue that came up was the problem of changes in the user-space API (UAPI) which break backward compatibility. Pírko said that "we have to be more careful about those changes". The problem is that reviewers are not always available to make detailed reviews of such changes and may not notice API-breaking changes. Pírko proposed creating a bot to check if a given patch introduces UAPI changes, changes in structs, or in netlink enums. Miller said he could block merges until discussions happen and that patchwork, which Miller uses to process patches from the mailing list, does some of this. He also pointed out there aren't enough test cases in the first place.

Starovoitov argued UAPI isn't special, there are other ways of breaking backward compatibility. He expressed concerns that such a bot could create a false sense that everything is fine while a patch could break compatibility and not be detected. Miller countered that UAPI is special in that "we're stuck with it forever". He then went on to propose that, since there's a maintainer (or more) for each module, he can make sure that each maintainer explicitly approves changes to those modules.

Data-center hardware changes

Starovoitov brought up the issue of a new type of hardware that is currently being deployed in data centers called a "multi-host NIC" (network interface card). It's a single NIC that is connected to multiple servers. Facebook, for example, uses this in its Yosemite platform, which shoves twelve servers into a 2U rack mount, in three modules. Each module is made of four servers connected to the traditional switch fabric with a single NIC through PCI-Express. Mellanox and Broadcom also have similar devices.

One question is how to manage those devices. Since they are connected through a PCI-Express bus, Linux will see them as a NIC, yet they are also a little like switches, in that they interconnect multiple servers. Furthermore, the kernel security model assumes that a NIC is trusted, and gladly opens its own memory to NICs through DMA; this can become a huge security issue when the NIC is under the control of another server. This can especially become problematic if we consider that there could be TLS hardware offloading in the future with the introduction of in-kernel TLS stacks.

The other problem is the question of reliability: since those devices are currently "dumb", they need to be managed just like a regular NIC. If the host managing the card crashes, it could disable a whole set of servers that rely on the same NIC. There could be an election process among the servers, but that complicates significantly what used to be a simple PCI connection.

Mukherjee pointed out that the model Cisco uses for this is that the "smart NIC" is a "slave" of the main switch fabric. It's a daughter card, which makes it easier to manage from a network perspective. It is clear that Linux will need a way to represent those devices, probably through the newly introduced switchdev or DSA (distributed switch architecture), but it will be something to keep an eye on as density increases in the data center.

There were many more discussions during Netconf, too many to cover here, but in the end, Miller thanked everyone for all the interesting topics as the participants dispersed for a day off to travel to Montreal to attend the following Netdev conference.

The author would like to thank the Netconf and Netdev organizers for travel to, and hosting assistance in, Toronto. Many thanks to Alexei Starovoitov for his time taken for a technical review of this article.

Note: this article first appeared in the Linux Weekly News.

11 April, 2017 05:00PM

A report from Netconf: Day 1

As is becoming traditional, two times a year the kernel networking community meets in a two-stage conference: an invite-only, informal, two-day plenary session called Netconf, held in Toronto this year, and a more conventional one-track conference open to the public called Netdev. I was invited to cover both conferences this year, given that Netdev was in Montreal (my hometown), and was happy to meet the crew of developers that maintain the network stack of the Linux kernel.

This article covers the first day of the conference which consisted of around 25 Linux developers meeting under the direction of David Miller, the kernel's networking subsystem maintainer. Netconf has no formal sessions; although some people presented slides, interruptions are frequent (indeed, encouraged) and the focus is on hashing out issues that are blocked on the mailing list and getting suggestions, ideas, solutions, and feedback from their peers.

Removing ndo_select_queue()

One of the first discussions that elicited a significant debate was the ndo_select_queue() function, a key component of the Linux polling system that determines when and how to send packets on a network interface (see netdev_pick_tx and friends). The general question was whether the use of ndo_select_queue() in drivers is a good idea. Alexander Duyck explained that Intel people were considering using ndo_select_queue() for receive/transmit queue matching. Intel drivers do not currently use the hook provided by the Linux kernel and it turns out no one is happy with ndo_select_queue(): the heuristics it uses don't really please anyone. The consensus (including from Duyck himself) seemed to be that it should just not be used anymore, or at least not used for that specific purpose.

The discussion turned toward the wireless network stack, which uses it extensively, but for other purposes. Johannes Berg explained that the wireless stack uses ndo_select_queue() for traffic classification, for example to get voice traffic through even if the best-effort queue is backed up. The wireless stack could stop using it by doing flow control completely inside the wireless stack, which already uses the fq_codel flow-control mechanism for other purposes, so porting away from ndo_select_queue() seems possible there.

The problem then becomes how to update all the drivers to change that behavior, which would be a lot of work. Still, it seems people are moving away from a generic ndo_select_queue() interface to stack-specific or even driver-specific (in the case of Intel) queue management interfaces.

refcount_t followup

There was a followup discussion on the integration of the refcount_t type into the network stack, which we covered recently. This type is meant to be an in-kernel defense against exploits based on overflowing or underflowing an object's reference count.

The consensus seems to be that having refcount_t used for debugging is acceptable, but it cannot be enabled by default. An issue that was identified is that the networking developers are fairly sure that introducing refcount_t would have a severe impact on performance, but they do not have benchmarks to prove it, something Miller identified as a problem that needs to be worked on. Miller then expressed some openness to the idea of having it as a kernel configuration option.
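
For background on the semantics being debated: rather than silently wrapping around, a refcount_t saturates at its maximum value and refuses further changes, turning a potential use-after-free into a bounded memory leak. A conceptual Python sketch of those semantics (the real type is C code built on atomic operations, so this is only an illustration):

class SaturatingRefcount:
    MAX = 2**32 - 1  # saturation point, standing in for UINT_MAX

    def __init__(self):
        self.value = 1  # a new object starts with one reference

    def inc(self):
        # Saturate instead of wrapping: the object leaks, but a
        # wrapped-around count can never cause a premature free.
        if self.value < self.MAX:
            self.value += 1

    def dec_and_test(self):
        if self.value in (0, self.MAX):
            return False  # underflowed or saturated: never free
        self.value -= 1
        return self.value == 0  # True means the last reference is gone

The performance worry comes from the extra comparisons this adds to every reference-count operation on hot paths, which is exactly what the missing benchmarks would need to quantify.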

A similar discussion happened, on the second day, regarding the KASan memory error detector, which was covered when it was introduced in 2014. Eric Dumazet warned that there could be a lot of issues that cannot be detected by KASan because of the way the network stack often bypasses regular memory-allocation routines for performance reasons. He also noted that this can sometimes mean the stack may go over the regular 10% memory limit (the tcp_mem parameter, described in the tcp(7) man page) for certain operations, especially when rebuilding out-of-order packets with lots of parallel TCP connections.

Therefore it was proposed that these special memory-recycling tricks could be optionally disabled, at run- or compile-time, to allow proper memory tracking. Dumazet argued this was a situation similar to refcount_t in that we need a way to disable the high-performance code paths to make the network stack easier to debug with KASan.

The problem with optional parameters is that they are often disabled in production or even by default, which, in turn, means that critical bugs cannot actually be found because the code paths are not tested. When I asked Dumazet about this, he explained that Google performs integration testing of new kernels before putting them in production, and those toggles could be enabled there to find and fix those bugs. But he agreed that certain code paths are then not tested until the code gets deployed in production.

So it seems the status quo remains: security folks want to improve the reliability of the kernel, but the network folks can't afford the performance cost. Yet it was clear in the discussions that the team cares about security issues and wants those issues to be fixed; the impact of some of the solutions is just too big.

Lightweight wireless management packet access

Berg explained that some users need to have high-performance access to certain management frames in the wireless stack and wondered how to best expose those to user space. The wireless stack already allows users to clone a network interface in "monitor" mode, but this has a big performance cost, as the radiotap header needs to be constructed from scratch and the packet header needs to be copied. As wireless improves and the bandwidth rises to gigabit levels, this can become a significant bottleneck for packet sniffers or reporting software that need to know precisely what's going on over the air outside of the regular access point client operation.

It seems the proper way to do this is with an eBPF program. As Miller summarized, just add another API call that allows loading a BPF program into the kernel and then those users can use a BPF filtering point to get the statistics they need. This will require an extra hook in the wireless stack, but it seems like this is the way that will be taken to implement this feature.

VLAN 0 inconsistencies

Hannes Frederic Sowa brought up the seemingly innocuous question of "how do we handle VLAN 0?" In theory, VLAN 0 means "no VLAN". But the Linux kernel currently handles this differently depending on whether the VLAN module is loaded and whether a VLAN 0 interface was created. Sometimes the VLAN tag is stripped, sometimes not.

It turns out the semantics were accidentally changed the last time this code was touched: behavior that originally worked is now broken. Sowa therefore got the go-ahead to fix this and make the behavior consistent again.

Loopy fun

Then came the turn of Jamal Hadi Salim, the maintainer of the kernel's traffic-control (tc) subsystem. The first issue he brought up is a problem in the tc REDIRECT action that can create infinite loops within the kernel. The problem can be easily alleviated when loops are created on the same interface: checks can be added that just drop packets coming from the same device and rate-limit logging to avoid a denial-of-service (DoS) condition.

The more serious problem occurs when a packet is forwarded from (say) interface eth0 to eth1 which then promptly redirects it from eth1 back to eth0. Obviously, this kind of problem can only be created by a user with root access so, at first glance, those issues don't seem that serious: admins can shoot themselves in the foot, so what?

But things become a little more serious when you consider the container case, where an untrusted user has root access inside a container and should have constrained resource limitations. Such a loop could allow this user to deploy an effective DoS attack against a whole group of containers running on the same machine. Even worse, this endless loop could possibly turn into a deadlock in certain scenarios, as the kernel could try to transmit the packet on the same device it originated from and block, progressively filling the queues and eventually completely breaking network access. Florian Westphal argued that a container can already create DoS conditions, for example by doing a ping flood.

According to Salim, this whole problem was created when two bits used for tracking such packets were reclaimed from the skb structure used to represent packets in the kernel. Those bits formed a simple TTL (time to live) field that was incremented on each loop; once a pre-determined limit was reached, the packet was dropped, breaking infinite loops. Salim asked everyone whether this should be fixed or whether we should just forget about this issue and move on.

Miller proposed to keep a one-behind state for the packet, fixing the simplest case (two interfaces). The general case, however, would require a bitmap of all the interfaces to be scanned, which would impose a large overhead. Miller said an attempt to fix this should somehow be made. The root of the problem is that the network maintainers are trying to reduce the size of the skb structure, because it's used in many critical paths of the network stack. Salim's position is that, without the TTL bits, there is no way to fix the general case here, and this constitutes a security issue. So either the bits need to be brought back, or we need to live with the inherent DoS threat.
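
To make the reclaimed mechanism concrete, here is a conceptual Python model of it (hypothetical names; the real counter lived in just two bits of the C skb structure, hence the tiny limit):

REDIRECT_LIMIT = 3  # two bits cannot count any higher than this

def tc_redirect(packet, transmit):
    # Every redirect bumps a per-packet counter; past the limit the
    # packet is dropped, which breaks eth0 -> eth1 -> eth0 cycles
    # after a bounded number of hops.
    hops = getattr(packet, "redirect_hops", 0) + 1
    if hops > REDIRECT_LIMIT:
        return False  # drop instead of looping forever
    packet.redirect_hops = hops
    transmit(packet)
    return True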

Dumping large statistics sets

Another issue Salim brought up was the question of how to export large statistics sets from the kernel. It turns out that some use cases may end up dumping a lot of data. Salim mentioned a real-world tc use case that calls for reading six million entries. The current netlink-based API provides a way to get only 20 entries at a time, which means it takes forever to dump the state of all those policy actions. Salim has a patch that changes the dump size to be eight times NLMSG_GOOD_SIZE, which already improves performance by an order of magnitude, although there are issues with checking the user-space buffer size there.

But a more complete solution is needed. What Salim proposed was a way to ask only for the states that changed since the last dump was requested. He has a patch to add a last_access field to the netlink_callback structure used by netlink_dump() to output data; that raised the question of how to actually use that field. Since Salim fetches that data every five seconds, he figured he could just tell the kernel to return all the nodes that changed in that period. But then if the dump takes more than five seconds to complete, the next dump may be missing states that changed during the extra delay. An alternative mechanism would be for the user-space utility to keep the time stamp it requested and use that as a delta for the next dump.
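
That alternative can be sketched from the user-space side (a hypothetical illustration: dump_since() stands in for a netlink dump filtered on the proposed last_access cutoff, and process() for whatever consumes the entries):

import time

def poll_changed_states(dump_since, process, interval=5.0):
    last_request = 0.0  # a cutoff of zero means "dump everything"
    while True:
        requested_at = time.time()
        for entry in dump_since(last_request):
            process(entry)
        # Keep the timestamp of the request itself: even if the dump
        # took longer than the polling interval to complete, the next
        # request still covers everything that changed meanwhile.
        last_request = requested_at
        time.sleep(interval)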

It turns out this is a larger problem than just tc. Dumazet mentioned this was an issue with fq_codel classes: he would even like to be able to dump those statistics faster than every five seconds. Roopa Prabhu mentioned that Cumulus also has similar problems dumping stats from bridges, so clearly a more generic solution is needed here. There is, however, a fundamental problem with dumping large statistics sets from the kernel: those statistics are constantly changing while the dump is created and unless versioning or locking mechanisms are used — which would slow things down — the data returned is bound to be only an approximation of reality. Salim promised to send a set of RFC patches to further the discussion of this issue, but during the following Netdev conference, Berg published a patch to fix this ten-year-old issue, which brought cheers from the audience.

The author would like to thank the Netconf and Netdev organizers for travel to, and hosting assistance in, Toronto. Many thanks to Berg, Dumazet, Salim, and Sowa for their time taken for a technical review of this article.

Note: this article first appeared in the Linux Weekly News.

11 April, 2017 05:00PM

Reproducible builds folks

Reproducible Builds: week 102 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday April 2 and Saturday April 8 2017:

Media coverage

Toolchain development and fixes

Reviews of unreproducible packages

27 package reviews have been added, 14 have been updated and 17 have been removed in this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Aaron M. Ucko (1)
  • Adrian Bunk (1)
  • Chris Lamb (2)

tests.reproducible-builds.org

Misc.

This week's edition was written by Chris Lamb, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

11 April, 2017 07:27AM

April 10, 2017

hackergotchi for Daniel Pocock

Daniel Pocock

If Alan Turing was born today, would he be a Muslim?

Alan Turing's name and his work are well known to anybody with a theoretical grounding in computer science. Turing developed his theories well before anybody invented file sharing, overclocking or mass surveillance. In fact, Turing was largely working in the absence of any computers at all: the transistor was only invented in 1947 and the microchip, the critical innovation that has made computing both affordable and portable, only came in 1960, six years after Turing's death. To this day, the Turing Test remains a well known challenge in the field of Artificial Intelligence. The most prestigious prize in computing, the A.M. Turing Award from the ACM, equivalent to the Nobel Prize in other fields of endeavour, is named in Turing's honour. (This year's award went to another British scientist, Sir Tim Berners-Lee, inventor of the World Wide Web.)


Potentially far more people know of Alan Turing for his groundbreaking work at Bletchley Park and the impact it had on cracking the Nazis' Enigma machines during World War 2, giving the allies an advantage against Hitler.

While in his lifetime, Turing exposed the secret communications of the Nazis, in his death, he exposed something manifestly repugnant about his own society. Turing's challenges with his sexuality (or Britain's challenge with it) are just as well documented as his greatest scientific achievements. The 2014 movie The Imitation Game tells Turing's story, bringing together the themes from his professional and personal life.

Had Turing chosen to flee British persecution by going abroad, he would be a refugee in the same sense as any person who crossed the seas to reach Europe today to avoid persecution elsewhere.

Please prove me wrong

In March, I blogged about the problem of racism that plagues Britain today. While some may have felt the tone of the blog was quite strong, I was in no way pleased to find my position affirmed by the events that occurred in the two days after the blog appeared.

Two days and two more human beings (both immigrants and both refugees) subjected to abhorrent and unnecessary acts of abuse in Great Britain. Both cases appear to be fuelled directly by the evil that has been oozing out of number 10 Downing Street since they decided to have a referendum on "Brexit".

What stands out about these latest crimes is not that they occurred (this type of thing has been going on for months now) but certain contrasts between their circumstances and to a lesser extent, the fact they occurred immediately after Theresa May formalized Britain's departure from the EU. One of the victims was almost beaten to death by a street gang, while the other was abused by men wearing uniforms. One was only a child, while the other is a mature adult who has been in the UK almost three decades, completely assimilated into British life, working and paying taxes. Both were doing nothing out of the ordinary at the time the abuse occurred: one had engaged in a conversation at a bus stop, the other was on a routine visit to a Government office. There is no evidence that either of them had done anything to provoke or invite the abhorrent treatment meted out to them by the followers of Theresa May and Nigel Farage.

The first victim, on 30 March, was Stojan Jankovic, a refugee from Yugoslavia who has been in the UK for 26 years. He had a routine meeting at an immigration department office where he was ambushed, thrown in the back of a van and sent to rot in a prison cell by Theresa May's gestapo. On Friday, 31 March, it was Reker Ahmed, a 17 year old Kurdish-Iranian beaten to the brink of death by a crowd in south London.

One of the more remarkable facts to emerge about these two cases is that while Stojan Jankovic was basically locked up for no reason at all, the street thugs who the police apprehended for the assault on Ahmed were kept in a cell for less than 48 hours and released again on bail. While the harmless and innocent Jankovic was eventually released after a massive public outcry, he spent more time locked up than that gang of violent criminals who beat Reker Ahmed.

In other words, Theresa May and Nigel Farage's Britain has more concern for the liberty of violent criminals than somebody like Jankovic who has been working and paying taxes in the UK since before any of those street thugs were born.

A deeper insight into Turing's fate

With gay marriage having been legal in the UK for a number of years now, the rainbow flag flying at the Tate and Sir Elton John achieving a knighthood, it becomes difficult for people to relate to the world in which Turing and many other victims were collectively classified by their sexuality, systematically persecuted by the state and ultimately died far sooner than they should have. (Turing was only 41 when he died).

In fact, the cruel and brutal forces that ripped Turing apart (and countless other victims too) haven't dissipated at all, they have simply shifted their target. The slanderous comments insinuating that immigrants "steal" jobs or that Islam is about terrorism are eerily reminiscent of suggestions that gay men abduct young boys or work as Soviet spies. None of these lies has any basis in fact, but repeat them often enough in certain types of newspaper and these ideas spread like weeds.

In an ironic twist, Turing's groundbreaking work at Bletchley Park was founded on the contributions of Polish mathematicians; their own country having been the first casualty of Hitler's aggression, they were both immigrants and refugees in Britain. Today, under the Theresa May/Nigel Farage leadership, Polish citizens have been subjected to regular vilification by the media and some have even been killed in the street.

It is said that a picture is worth a thousand words. When you compare these two pieces of propaganda: a 1963 article in the Sunday Mirror advising people "How to spot a possible homo" and a UK Government billboard encouraging people to be on the lookout for people who look different, could you imagine the same type of small-minded and power-hungry tyrants crafting them, singling out a minority so as to keep the public's attention in the wrong place?


Many people have noticed that these latest UK Government posters portray foreigners, Muslims and basically anybody who is not white using a range of characteristics found in anti-Semitic propaganda from the Third Reich.

Do the people who create such propaganda appear to have any concern whatsoever for the people they hurt? How would Alan Turing have felt when he encountered propaganda like that from the Sunday Mirror? Do posters like these encourage us to judge people by their gifts in science, the arts or sporting prowess or do they encourage us to lump them all together based on their physical appearance?

It is a basic expectation of scientific methodology that when you repeat the same experiment, you should get the same result. What type of experiment are Theresa May and Nigel Farage conducting and what type of result would you expect?

Playing ping-pong with children

If anybody has any doubt that this evil comes from the top, take a moment to contemplate the 3,000 children who were baited with the promise of resettlement from the Calais "jungle" camp into the UK under the Dubs amendment.

When French authorities closed the "jungle" in 2016, the children were lured out of the camp and left with nowhere to go as Theresa May and French authorities played ping-pong with them. Given that the UK parliament had already agreed they should be accepted, was there any reason for Theresa May to dig her heels in and make these children suffer? Or was she just trying to prove her credentials as somebody who can bastardize migrants just the way Nigel Farage would do it?

How do British politicians really view migrants?

Parliamentarian Keith Vaz, former chair of the Home Affairs Select Committee (responsible for security, crime, prostitution and similar things), was exposed with young men from eastern Europe, encouraging them to take drugs before he ordered them: "Take your shirt off. I'm going to attack you." How many British MPs see foreigners this way? Next time you are groped at an airport security checkpoint, remember it was people like Keith Vaz and his committee who oversee those abuses, writing among other things that "The wider introduction of full-body scanners is a welcome development". No need to "take your shirt off" when these machines can look through it as easily as they can look through your children's underwear.

According to the World Health Organization, HIV/AIDS kills as many people as the September 11 attacks every single day. Keith Vaz apparently had no concern for the possibility he might spread this disease any further: the media reported he doesn't use any protection in his extra-marital relationships.

While Britain's new management continue to round up foreigners like Stojan Jankovic who have done nothing wrong, they chose not to prosecute Keith Vaz for his antics with drugs and prostitution.

Who is Britain's next Alan Turing?

Britain's next Alan Turing may not be a homosexual. He or she may have been a child turned away by Theresa May's spat with the French at Calais, a migrant bundled into a deportation van by the gestapo (who are just following orders) or perhaps somebody of Muslim appearance who is set upon by thugs in the street who have been energized by Nigel Farage. If you still have any uncertainty about what Brexit really means, this is it. A country that denies itself the opportunity to be great by subjecting itself to be ruled under the "divide and conquer" mantra of the colonial era.

Throughout the centuries, Britain has produced some of the most brilliant scientists of their time. Newton, Darwin and Hawking are just some of those who are even more prominent than Turing, household names around the world. One can only wonder what the history books will have to say about Theresa May and Nigel Farage however.

Next time you see a British policeman accosting a Muslim, whether it is at an airport, in a shopping centre, keeping Manchester United souvenirs or simply taking a photograph, spare a thought for Alan Turing and the era when homosexuals were their target of choice.

10 April, 2017 08:01PM by Daniel.Pocock

hackergotchi for Michal Čihař

Michal Čihař

New free software projects on Hosted Weblate

Hosted Weblate also provides free hosting for free software projects. I finally got around to processing requests a bit faster, so there are just a few new projects.

This time, the newly hosted projects include:

  • Pext - Python-based extendable tool
  • Dino - modern Jabber/XMPP Client using GTK+/Vala

If you want to support this effort, please donate to Weblate; recurring donations are especially welcome to keep this service running. You can set them up on Liberapay or Bountysource.

Filed under: Debian English Weblate | 0 comments

10 April, 2017 10:00AM

April 09, 2017

Enrico Zini

Ansible config for my stereo

I bought a Raspberry Pi 2 and its case. I could not reuse the existing SD card because it wants a MicroSD.

A wise person once told me:

First you do it, then you document it, then you automate it.

I had done the first two, and now I've redone the whole setup with ansible, here: stereo.tar.xz.

Hifi with a Raspberry Pi 2 and its case

09 April, 2017 06:54PM

Sam Hartman

When "when" is too hard a question: SQLAlchemy, Python datetime, and ISO8601

A new programmer asked on a work chat room how timezones are handled in databases. He asked if it was a good idea to store things in UTC. The senior programmers all laughed as we told some of our horror stories with timezones. Yes, UTC is great; if only it were that simple.
About a week later I was designing the schema for a blue sky project I'm implementing. I had to confront time in all its Pythonic horror.
Let's start with the datetime.datetime class. Datetime objects optionally include a timezone. If no timezone is present, several methods such as timestamp treat the object as a local time in the system's timezone. The timestamp method returns a POSIX timestamp, which is always expressed in UTC, so knowing the input timezone is important. The now method constructs such an object from the current time.
However, other methods act differently. The utcnow method constructs a datetime object that has the UTC time but is not marked with a timezone. So, for example, datetime.fromtimestamp(datetime.utcnow().timestamp()) produces the wrong result unless your system timezone happens to have the same offset as UTC.
It's also possible to construct a datetime object that includes a UTC time and is marked as having a UTC time. The utcnow method never does this, but you can pass the UTC timezone into the now method and get that effect. As you'd expect, the timestamp method returns the correct result on such a datetime.
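A minimal demonstration of the asymmetry (the difference is only visible if your system timezone is not UTC):
from datetime import datetime, timezone

naive_utc = datetime.utcnow()           # UTC wall time, but no tzinfo
aware_utc = datetime.now(timezone.utc)  # UTC wall time, marked as UTC

# timestamp() treats the naive object as local time, so the two POSIX
# timestamps differ by roughly your UTC offset in seconds:
print(naive_utc.timestamp() - aware_utc.timestamp())

# Round-tripping the naive object shifts the wall clock: this prints
# the current UTC reading presented as if it were local time.
print(datetime.fromtimestamp(naive_utc.timestamp()))
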
Now enter SQLAlchemy, one of the more popular Python ORMs. Its DATETIME type has an argument that tries to request a column capable of storing a timezone from the underlying database. You aren't guaranteed to get this though; some databases don't provide that functionality. With PostgreSQL, I do get such a column, although something in SQLAlchemy is not preserving the timezones (though it does correctly adjust the time). That is, I'll store a UTC time in an object, flush it to my session, and then read back the same time represented in my local timezone (marked as my local timezone). You'd think this would be safe.
Enter SQLite. SQLite makes life hard for people wanting to store time; it seems to want to store things as strings. That's fairly incompatible with storing a timezone and doing any sort of comparisons on dates. SQLAlchemy does not try to store a timezone in SQLite. It just trims any timezone information from the datetime. So, if I do something like
from datetime import datetime, timezone

# obj is an object mapped with an SQLAlchemy DATETIME column;
# session is an SQLAlchemy session bound to SQLite.
d = datetime.now(timezone.utc)
obj.date_col = d
session.add(obj)
session.flush()
assert obj.date_col == d  # fails: the tzinfo was stripped on flush
assert obj.date_col.timestamp() == d.timestamp()  # fails: naive time is read as local
assert d == obj.date_col.replace(tzinfo=timezone.utc)  # finally succeeds

There are some unfortunate consequences of this. If you mark your datetimes with timezone information (even if it is always the same timezone), whether two datetimes representing the same datetime compare equal depends on whether objects have been flushed to the session yet. If you don't mark your objects with timezones, then you may not store timezone information on other databases.
At least if you use only the methods we've discussed so far, you're reasonably safe if you use local time everywhere in your application and don't mark your datetimes with timezones. That's undesirable because as our new programmer correctly surmised, you really should be using UTC. This is particularly true if users of your database might span multiple timezones.
You can use UTC time and not mark your objects as UTC. This will give the wrong data with a database that actually does support timezones, but will sort of work with SQLite. You need to be careful never to convert your datetime objects into POSIX time as you'll get the wrong result.
It turns out that my life was even more complicated because parts of my project serialize data into JSON. For that serialization, I've chosen ISO 8601. You've probably seen that format: '2017-04-09T18:17:27.340410+00:00'. Datetime provides the convenient isoformat method to print timestamps in the ISO 8601 format. If the datetime has a timezone indication, it is included in the ISO formatted string. If not, then no timezone indication is included. You know how I mentioned that datetime treats a time without a timezone marker as local time? Yeah, well, that's not what 8601 does: UTC all the way, baby! And at least the parser in the iso8601 module will always include timezone markers. So, if you use datetime to print a timestamp without a timezone marker and then read that back in to construct a new datetime on the deserialization side, then you'll get the wrong time. OK, so mark things with timezones then. Well, if you use local time, then the time you get depends on whether you print the ISO string before or after session flush (before or after SQLAlchemy trims the timezone information as it goes to SQLite).
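To illustrate the round-trip problem (this relies on the iso8601 module's documented default of assuming UTC for strings that carry no offset):
from datetime import datetime
import iso8601  # the third-party parser mentioned above

naive_local = datetime.now()       # local wall time, no timezone marker
wire = naive_local.isoformat()     # ISO 8601 string without an offset

parsed = iso8601.parse_date(wire)  # the parser assumes UTC here
# parsed now denotes a different instant: the local wall-clock reading
# relabelled as UTC, off by the system's UTC offset.
print(naive_local, "->", parsed)
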
It turns out that I had the additional complication of one side of my application using SQLite and one side using PostgreSQL. Remember how I mentioned that something between SQLAlchemy and PostgreSQL was recasting my times in local timezone (although keeping the time the same)? Well, consider how that's going to work. I serialize with the timezone marker on the PostgreSQL side. I get a ISO8601 localtime marked with the correct timezone marker. I deserialize on the SQLite side. Before session flush, I get a local time marked as localtime. After session flush, I get a local time with no marking. That's bad. If I further serialize on the SQLite side, I'll get that local time incorrectly marked as UTC. Moreover, all the times being locally generated on the SQLite side are UTC, and as we explored, SQLite really only wants one timezone in play.
I eventually came up with the following approach:

  1. If I find myself manipulating a time without a timezone marking, assert that its timezone is UTC not localtime.

  2. Always use UTC for times coming into the system.

  3. If I'm generating an ISO 8601 time from a datetime that has a timezone marker in a timezone other than UTC, represent that time as a UTC-marked datetime adjusting the time for the change in timezone.


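Those rules condense into a small pair of helpers; this is a minimal sketch of my approach, not code lifted from the project:
from datetime import timezone

def as_utc(dt):
    # Rules 1 and 2: an unmarked datetime is asserted to carry UTC
    # wall time (never local time) and gets marked as such; a marked
    # datetime is adjusted to UTC.
    if dt.tzinfo is None:
        return dt.replace(tzinfo=timezone.utc)
    return dt.astimezone(timezone.utc)

def to_iso8601(dt):
    # Rule 3: always serialize as a UTC-marked ISO 8601 string so the
    # deserializing side never has to guess.
    return as_utc(dt).isoformat()
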
This is way too complicated. I think that both datetime and SQLAlchemy's SQLite time handling have a lot to answer for. I think SQLAlchemy's core time handling may also have some to answer for, but I'm less sure of that.

09 April, 2017 06:39PM

Antoine Beaupré

Contribute your skills to Debian in Montreal, April 14 2017

Join us in Montreal, on April 14 2017, and we will find a way in which you can help Debian with your current set of skills! You might even learn one or two things in passing (but you don't have to).

Debian is a free operating system for your computer. An operating system is the set of basic programs and utilities that make your computer run. Debian comes with tens of thousands of packages, precompiled software bundled up for easy installation on your machine. A number of other operating systems, such as Ubuntu and Tails, are based on Debian.

The upcoming version of Debian, called Stretch, will be released later this year. We need you to help us make it awesome :)

Whether you're a computer user, a graphics designer, or a bug triager, there are many ways you can contribute to this effort. We also welcome experience in consensus decision-making, anti-harassment teams, and package maintenance. No effort is too small and whatever you bring to this community will be appreciated.

Here's what we will be doing:

  • We will triage bug reports that are blocking the release of the upcoming version of Debian.

  • Debian package maintainers will fix some of these bugs.

Goals and principles

This is a work in progress, and a statement of intent. Not everything is organized and confirmed yet.

We want to bring together a heterogeneous group of people. This goal will guide our handling of sponsorship requests, and will help us make decisions if more people want to attend than we can welcome properly. In other words: if you're part of a group that is currently under-represented in computer communities, we would like you to be able to attend.

We are committed to providing a friendly, safe and welcoming environment for all, regardless of level of experience, gender, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, nationality, or other similar personal characteristic. Attending this event requires reading and respecting the Debian Code of Conduct, that sets the standards in terms of behaviour for the whole event, including communication (public and private) before, while and after.

The space where this event will take place is unfortunately not accessible to wheelchairs. Food (including vegetarian options) should be provided for lunch. If you have any specific needs regarding food, please let us know when registering, and we will do our best.

What we will be doing

This will be an informal session to confirm and fix bugs in Debian. If you have never worked with Debian packages, this is a good opportunity to learn about packaging and bugtracker usage.

Bugs flagged as Release Critical are blocking the release of the upcoming version of Debian. To fix them, it helps to make sure the bug report documents the up-to-date status of the bug, and of its resolution. One does not need to be a programmer to do this work! For example, you can try and reproduce bugs in software you use... or in software you will discover. This helps package maintainers better focus their work.

We will also try to actually fix bugs by testing patches and uploading fixes into Debian itself. Antoine Beaupré, a seasoned Debian developer, will be available to sponsor uploads and teach people about basic Debian packaging skills.

Where? When? How to register?

See https://wiki.debian.org/BSP/2017/04/ca/Montreal for the exact address and time.

09 April, 2017 03:06PM

hackergotchi for Christoph Egger

Christoph Egger

Secured OTP Server (ASIS CTF 2017)

This weekend was ASIS Quals weekend again. And just like last year they have quite a lot of nice crypto-related puzzles which are fun to solve (and not "the same as every ctf").

Actually Secured OTP Server is pretty much the same as the First OTP Server (it's a "fixed" version to enforce the intended attack). However, the template phrase now starts with enough stars that the ciphertext can no longer be broken by simply taking the e-th root:

def gen_otps():
    template_phrase = '*************** Welcome, dear customer, the secret passphrase for today is: '

    OTP_1 = template_phrase + gen_passphrase(18)
    OTP_2 = template_phrase + gen_passphrase(18)

    otp_1 = bytes_to_long(OTP_1)
    otp_2 = bytes_to_long(OTP_2)

    nbit, e = 2048, 3
    privkey = RSA.generate(nbit, e = e)
    pubkey  = privkey.publickey().exportKey()
    n = getattr(privkey.key, 'n')

    r = otp_2 - otp_1
    if r < 0:
        r = -r
    IMP = n - r**(e**2)
    if IMP > 0:
        c_1 = pow(otp_1, e, n)
        c_2 = pow(otp_2, e, n)
    return pubkey, OTP_1[-18:], OTP_2[-18:], c_1, c_2

Now let A = template * 2^(18*8) and B = passphrase, so that OTP = A + B. c therefore is (A+B)^3 mod N == A^3 + 3A^2B + 3AB^2 + B^3. Notice that only A^3 is larger than N, and it is statically known. Therefore we can calculate A^3 // N and add that many multiples of N back onto c to "undo" the modulo operation. With that, it's only iroot and long_to_bytes to the solution. Note that we're talking about OTP and C here; the code actually produces two OTP and C values, but you can use either one just fine.

#!/usr/bin/python3

import sys
from Crypto.Util.number import bytes_to_long
from gmpy2 import iroot

PREFIX = b'*************** Welcome, dear customer, the secret passphrase for today is: '
OTPbase = bytes_to_long(PREFIX + b'\x00' * 18)  # A: the known template, shifted left by 18 bytes

N = 27990886688403106156886965929373472780889297823794580465068327683395428917362065615739951108259750066435069668684573174325731274170995250924795407965212988361462373732974161447634230854196410219114860784487233470335168426228481911440564783725621653286383831270780196463991259147093068328414348781344702123357674899863389442417020336086993549312395661361400479571900883022046732515264355119081391467082453786314312161949246102368333523674765325492285740191982756488086280405915565444751334123879989607088707099191056578977164106743480580290273650405587226976754077483115441525080890390557890622557458363028198676980513

WRAPPINGS = (OTPbase ** 3) // N  # how many times A^3 alone wraps the modulus

C = 13094996712007124344470117620331768168185106904388859938604066108465461324834973803666594501350900379061600358157727804618756203188081640756273094533547432660678049428176040512041763322083599542634138737945137753879630587019478835634179440093707008313841275705670232461560481682247853853414820158909864021171009368832781090330881410994954019971742796971725232022238997115648269445491368963695366241477101714073751712571563044945769609486276590337268791325927670563621008906770405196742606813034486998852494456372962791608053890663313231907163444106882221102735242733933067370757085585830451536661157788688695854436646

x = N * WRAPPINGS + C  # undo the reduction: x is (A+B)^3 over the integers

val, _ = iroot(x, 3)    # exact integer cube root recovers A + B
bstr = "%x" % int(val)  # hex digits of the recovered plaintext

# decode the hex string pairwise back into ASCII (a poor man's long_to_bytes)
for i in range(0, len(bstr) // 2):
    sys.stdout.write(chr(int(bstr[2*i:2*i+2], 16)))

print()

09 April, 2017 01:20PM

Michael Stapelberg

manpages.debian.org: what’s new since the launch?

On 2017-01-18, I announced that https://manpages.debian.org had been modernized. Let me catch you up on a few things which happened in the meantime:

  • Debian experimental was added to manpages.debian.org. I was surprised to learn that adding experimental only required 52MB of disk usage. Further, Debian contrib was added after realizing that contrib licenses are compatible with the DFSG.
  • Indentation in some code examples was fixed upstream in mandoc.
  • Address-bar search should now also work in Firefox, which apparently requires a title attribute on the opensearch XML file reference.
  • manpages now specify their language in the <html> tag so that search engines can offer users the most appropriate version of the manpage.
  • I contributed mandocd(8) to the mandoc project, which debiman now uses for significantly faster manpage conversion (useful for disaster recovery/development). An entire run previously took 2 hours on my workstation. With this change, it takes merely 22 minutes. The effects are even more pronounced on manziarly, the VM behind manpages.debian.org.
  • Thanks to Peter Palfrader (weasel) from the Debian System Administrators (DSA) team, manpages.debian.org is now serving its manpages (and most of its redirects) from Debian’s static mirroring infrastructure. That way, planned maintenance won’t result in service downtime. I contributed README.static-mirroring.txt, which describes the infrastructure in more detail.

The list above is not complete, but rather a selection of things I found worth pointing out to the larger public.

There are still a few things I plan to work on soon, so stay tuned :).

09 April, 2017 11:23AM

hackergotchi for Matthew Garrett

Matthew Garrett

A quick look at the Ikea Trådfri lighting platform

Ikea recently launched their Trådfri smart lighting platform in the US. The idea of Ikea plus internet security together at last seems like a pretty terrible one, but having taken a look it's surprisingly competent. Hardware-wise, the device is pretty minimal - it seems to be based on the Cypress[1] WICED IoT platform, with 100MBit ethernet and a Silicon Labs Zigbee chipset. It's running the Express Logic ThreadX RTOS, has no running services on any TCP ports and appears to listen on a single UDP port. As IoT devices go, it's pleasingly minimal.

That single port seems to be a COAP server running with DTLS and a pre-shared key that's printed on the bottom of the device. When you start the app for the first time it prompts you to scan a QR code that's just a machine-readable version of that key. The Android app has code for using the insecure COAP port rather than the encrypted one, but the device doesn't respond to queries there so it's presumably disabled in release builds. It's also local only, with no cloud support. You can program timers, but they run on the device. The only other service it seems to run is an mdns responder, which responds to the _coap._udp.local query to allow for discovery.
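
As an illustration, discovery should amount to something like the following sketch, which assumes the third-party python-zeroconf package (any mDNS client would do just as well):

import time
from zeroconf import ServiceBrowser, Zeroconf

class CoapListener:
    # Print whatever answers the _coap._udp.local query.
    def add_service(self, zc, type_, name):
        print("found:", name, zc.get_service_info(type_, name))

    def remove_service(self, zc, type_, name):
        pass

    def update_service(self, zc, type_, name):
        pass

zc = Zeroconf()
browser = ServiceBrowser(zc, "_coap._udp.local.", CoapListener())
time.sleep(5)  # give the responder a moment to answer
zc.close()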

From a security perspective, this is pretty close to ideal. Having no remote APIs means that security is limited to what's exposed locally. The local traffic is all encrypted. You can only authenticate with the device if you have physical access to read the (decently long) key off the bottom. I haven't checked whether the DTLS server is actually well-implemented, but it doesn't seem to respond unless you authenticate first which probably covers off a lot of potential risks. The SoC has wireless support, but it seems to be disabled - there's no antenna on board and no mechanism for configuring it.

However, there's one minor issue. On boot the device grabs the current time from pool.ntp.org (fine) but also hits http://fw.ota.homesmart.ikea.net/feed/version_info.json . That file contains a bunch of links to firmware updates, all of which are also downloaded over http (and not https). The firmware images themselves appear to be signed, but downloading untrusted objects and then parsing them isn't ideal. Realistically, this is only a problem if someone already has enough control over your network to mess with your DNS, and being wired-only makes this pretty unlikely. I'd be surprised if it's ever used as a real avenue of attack.

Overall: as far as design goes, this is one of the most secure IoT-style devices I've looked at. I haven't examined the COAP stack in detail to figure out whether it has any exploitable bugs, but the attack surface is pretty much as minimal as it could be while still retaining any functionality at all. I'm impressed.

[1] Formerly Broadcom


09 April, 2017 12:16AM