June 27, 2017

Joey Hess

12 to 24 volt house conversion

Upgrading my solar panels involved switching the house from 12 volts to 24 volts. No reasonably priced charge controllers can handle 1 kW of PV at 12 volts.

There might not be a lot of people who need to do this; entirely 12 volt offgrid houses are not super common, and most upgrades these days probably involve rooftop microinverters and so would involve a switch from DC to AC. I did not find a lot of references online for converting a whole house's voltage from 12V to 24V.

To prepare, I first checked that all the fuses and breakers were rated for > 24 volts. (Actually, > 30 volts because it will be 26 volts or so when charging.) Also, I checked for any shady wiring, and verified that all the wires I could see in the attic and wiring closet were reasonably sized (10AWG) and in good shape.

Then I:

  1. Turned off every light, unplugged every plug, pulled every fuse and flipped every breaker.
  2. Rewired the battery bank from 12V to 24V.
  3. Connected the battery bank to the new charge controller.
  4. Engaged the main breaker, and waited for anything strange.
  5. Screwed in one fuse at a time.

lighting

The house used all fluorescent lights, and they have ballasts rated for only 12V. While they work at 24V, they might blow out sooner or overheat. In fact one died this evening, and while it was flickering before, I suspect the 24V did it in. It makes sense to replace them with more efficient LED lights anyway. I found some 12-24V DC LED lights for regular screw-in (Edison) light fixtures. They do not seem very common; Amazon only had a few models, and they shipped from China.

Also, I ordered a 15 foot long, 300 LED strip light, which runs on 24V DC and has an adhesive backing. Great stuff -- it can be cut to different lengths and stuck anywhere. I installed some underneath the cook stove hood and the kitchen cabinets, which didn't have lights before.

Similar LED strips are used in some desktop lamps. My lamp was 12V only (barely lit at 24V), but I was able to replace its LED strip, upgrading it to 24V and three times as bright.

(Christmas lights are another option; many LED christmas lights run on 24V.)

appliances

My Lenovo laptop's power supply that I use in the house is a vehicle DC-DC converter, rated for 12-24V. It seems to be running fine at 26V, and did not get warm even when charging the laptop up from empty.

I'm using buck converters to run various USB-powered (5V) ARM boxes such as my SheevaPlug. They're quarter-sized, so they fit anywhere, and are very efficient.

My satellite internet receiver is running with a large buck converter feeding 12V to an inverter, which feeds a 30V DC power supply. That triple conversion is inefficient, but it works for now.

The ceiling fan runs on 24V, and does not seem to run much faster than on 12V. It may be rated for 12-24V. Can't seem to find any info about it.

The radio is a 12V car radio. I used an LM317 to run it on 24V, to avoid the RF interference a buck converter would have produced. This is a very inefficient conversion; half of the power is wasted as heat. But since I can stream internet radio all day now via satellite, I won't use the FM radio very often.
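
As a rough sanity check of that figure (assuming the ~26 volt charging voltage mentioned earlier): a linear regulator like the LM317 passes the full load current while dropping the excess voltage as heat, so

\eta = \frac{V_\mathrm{out}}{V_\mathrm{in}} = \frac{12\,\mathrm{V}}{26\,\mathrm{V}} \approx 46\%,
\qquad
P_\mathrm{heat} = (V_\mathrm{in} - V_\mathrm{out}) \cdot I_\mathrm{load}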

Fridge... still running on propane for now, but I have an idea for a way to build a cold storage battery that will use excess power from the PV array, and keep a fridge at a constant 34 degrees F. Next home improvement project in the queue.

27 June, 2017 03:44AM

Benjamin Mako Hill

Learning to Code in One’s Own Language

I recently published a paper with Sayamindu Dasgupta that provides evidence in support of the idea that kids can learn to code more quickly when they are programming in their own language.

Millions of young people from around the world are learning to code. Often, during their learning experiences, these youth are using visual block-based programming languages like Scratch, App Inventor, and Code.org Studio. In block-based programming languages, coders manipulate visual, snap-together blocks that represent code constructs instead of textual symbols and commands that are found in more traditional programming languages.

The textual symbols used in nearly all non-block-based programming languages are drawn from English—consider “if” statements and “for” loops for common examples. Keywords in block-based languages, on the other hand, are often translated into different human languages. For example, depending on the language preference of the user, an identical set of computing instructions in Scratch can be represented in many different human languages:

Examples of a short piece of Scratch code shown in four different human languages: English, Italian, Norwegian Bokmål, and German.

Although my research with Sayamindu Dasgupta focuses on learning, both Sayamindu and I worked on local language technologies before coming back to academia. As a result, we were both interested in how the increasing translation of programming languages might be making it easier for non-English speaking kids to learn to code.

After all, a large body of education research has shown that early-stage education is more effective when instruction is in the language that the learner speaks at home. Based on this research, we hypothesized that children learning to code with block-based programming languages translated to their mother-tongues will have better learning outcomes than children using the blocks in English.

We sought to test this hypothesis in Scratch, an informal learning community built around a block-based programming language. We were helped by the fact that Scratch is translated into many languages and has a large number of learners from around the world.

To measure learning, we built on some of our own previous work and looked at learners’ cumulative block repertoires—similar to a code vocabulary. By observing a learner’s cumulative block repertoire over time, we can measure how quickly their code vocabulary is growing.

Using this data, we compared the rate of growth of cumulative block repertoire between learners from non-English speaking countries using Scratch in English to learners from the same countries using Scratch in their local language. To identify non-English speakers, we considered Scratch users who reported themselves as coming from five primarily non-English speaking countries: Portugal, Italy, Brazil, Germany, and Norway. We chose these five countries because they each have one very widely spoken language that is not English and because Scratch is almost fully translated into that language.

Even after controlling for a number of factors like social engagement on the Scratch website, user productivity, and time spent on projects, we found that learners from these countries who use Scratch in their local language have a higher rate of cumulative block repertoire growth than their counterparts using Scratch in English. This faster growth was despite having a lower initial block repertoire. The graph below visualizes our results for two “prototypical” learners who start with the same initial block repertoire: one learner who uses the English interface, and a second learner who uses their native language.

Summary of the results of our model for two prototypical individuals.

Our results are in line with what theories of education have to say about learning in one’s own language. Our findings also represent good news for designers of block-based programming languages who have spent considerable amounts of effort in making their programming languages translatable. It’s also good news for the volunteers who have spent many hours translating blocks and user interfaces.

Although we find support for our hypothesis, we should stress that our findings are both limited and incomplete. For example, because we focus on estimating the differences between Scratch learners, our comparisons are between kids who all managed to successfully use Scratch. Before Scratch was translated, kids with little working knowledge of English or the Latin script might not have been able to use Scratch at all. Because of translation, many of these children are now able to learn to code.


This blog post and the work that it describes is a collaborative project with Sayamindu Dasgupta. Sayamindu also published a very similar version of the blog post in several places. Our paper is open access and you can read it here. The paper was published in the proceedings of the ACM Learning @ Scale Conference. We also recently gave a talk about this work at the International Communication Association’s annual conference. We received support and feedback from members of the Scratch team at MIT (especially Mitch Resnick and Natalie Rusk), as well as from Nathan TeBlunthuis at the University of Washington. Financial support came from the US National Science Foundation.

27 June, 2017 01:00AM by Benjamin Mako Hill

June 26, 2017

Niels Thykier

debhelper 10.5.1 now available in unstable

Earlier today, I uploaded debhelper version 10.5.1 to unstable.  The following are some highlights compared to version 10.2.5:

  • debhelper now supports the “meson+ninja” build system. Kudos to Michael Biebl.
  • Better cross building support in the “makefile” build system (PKG_CONFIG is set to the multi-arched version of pkg-config). Kudos to Helmut Grohne.
  • New dh_missing helper to take over dh_install --list-missing/--fail-missing while being able to see files installed from other helpers (see the sketch after this list). Kudos to Michael Stapelberg.
  • dh_installman now logs what files it has installed so the new dh_missing helper can “see” them as installed.
  • Improve documentation (e.g. compare and contrast the dh_link config file with ln(1) to assist people who are familiar with ln(1))
  • Avoid triggering a race-condition with libtool by ensuring that dh_auto_install runs make with -j1 when libtool is detected (see Debian bug #861627)
  • Optimizations and parallel processing (more on this later)
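
As an illustration, a debian/rules using these features might look like this (a minimal sketch, not taken from any real package; recipe lines must be indented with tabs):

#!/usr/bin/make -f
# Build with the new meson+ninja build system
%:
	dh $@ --buildsystem=meson+ninja

# Fail the build if a file installed by any helper is not shipped
# by a binary package
override_dh_missing:
	dh_missing --fail-missing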

There are also some changes to the upcoming compat 11:

  • Use “/run” as “run state dir” for autoconf
  • dh_installman will now guess the language of a manpage from the path name before using the extension.

Filed under: Debhelper, Debian

26 June, 2017 09:59PM by Niels Thykier

Joey Hess

DIY solar upgrade complete-ish

Success! I received the Tracer4215BN charge controller after UPS accidentally-on-purpose delivered it to a neighbor, got it connected up, and got the battery bank rewired to 24V in a couple hours.

charge controller reading 66.1V at 3.4 amps on panels, charging battery at 29.0V at 7.6A

Here it's charging the batteries at 220 watts, and that picture was taken at 5 pm, when the light hits the panels at nearly a 90 degree angle. Compare with the old panels, where the maximum I ever recorded at high noon was 90 watts. I've made more power since 4:30 pm than I used to be able to make in a day! \o/

26 June, 2017 09:44PM

Alexander Wirt

Stretch Backports available

With the release of stretch we are pleased to open the doors for stretch-backports and jessie-backports-sloppy. \o/

As usual with a new release we will change a few things for the backports service.

What to upload where

As a reminder, uploads to a release-backports pocket are to be taken from release + 1, uploads to a release-backports-sloppy pocket are to be taken from release + 2. Which means:

Source Distribution    Backports Distribution    Sloppy Distribution
buster                 stretch-backports         jessie-backports-sloppy
stretch                jessie-backports          -

Deprecation of LTS support for backports

We started supporting backports for as long as there is LTS support as an experiment. Unfortunately it didn’t work out: most maintainers didn’t want to support oldoldstable-backports (squeeze) for the lifetime of LTS. So things started to rot in squeeze and most packages didn’t receive updates. After long discussions we decided to deprecate LTS support for backports. From now on squeeze-backports(-sloppy) is closed and will not receive any updates. Expect it to be removed from the mirrors and moved to the archive in the near future.

BSA handling

We - the backports team - didn’t scale well in processing BSA requests. To handle them better in the future we decided to change the process a little bit. If you upload a package which fixes security problems, please fill out the BSA template and create a ticket in the rt tracker (see https://backports.debian.org/Contribute/#index3h2 for details).

Stretching the rules

From time to time it’s necessary to not follow the backports rules, e.g. when a package needs to be in testing or a version needs to be in Debian. If you think you have one of those cases, please talk to us on the list before uploading the package.

Thanks

Thanks have to go out to all the people making backports possible, and that includes up front the backporters themselves who upload the packages and track and update them on a regular basis, but also the buildd team making the autobuilding possible and the ftp masters for creating the suites in the first place.

We wish you a happy stretch :) Alex, on behalf of the Backports Team

26 June, 2017 09:00PM

Jonathan Dowland

Coming in from the cold

I've been using a Mac day-to-day since around 2014, initially as a refreshing break from the disappointment I felt with GNOME3, but since then a few coincidences have kept me on the platform. Something happened earlier in the year that made me start to think about a move back to Linux on the desktop. My next work hardware refresh is due in March next year, which gives me about nine months to "un-plumb" myself from the Mac ecosystem. Off the top of my head, here are the things I'm going to have to address:

  • the command modifier key (⌘). It's a small thing but its use on the Mac platform is very consistent, and since it's not used at all within terminals, there's never a clash between window management and terminal applications. Compared to the morass of modifier keys on Linux, I will miss it. It's possible that if I settle on a desktop environment and spend some time configuring it, I can get to a similarly comfortable place. Similarly, I'd like to stick to one clipboard, and if possible, ignore the select-to-copy, middle-click-to-paste one entirely. This may be an issue for older software.

  • The Mac hardware trackpad and gestures are honestly fantastic. I still have some residual muscle memory of using the Thinkpad trackpoint, and so I'm weaning myself off the trackpad by using an external Thinkpad keyboard with the work Mac, and increasingly using an X61s where possible.

  • SizeUp. I wrote about this in useful mac programs. It's a window management helper that lets you use keyboard shortcuts to move and resize windows. I may need something similar, depending on what desktop environment I settle on. (I'm currently evaluating Awesome WM.)

  • 1Password. These days I think a password manager is an essential piece of software, and 1Password is a very, very good example of one. There are several other options now, but sadly none that seem remotely as nice as 1Password. Ryan C Gordon wrote 1pass, a Linux-compatible tool to read a 1Password keychain, but it's quite raw and needs some love. By coincidence that's currently his focus, and one can support him in this work via his Patreon.

  • Font rendering. Both monospace and regular fonts look fantastic out of the box on a Mac, and it can be quite hard to switch back and forth between a Mac and Linux due to the difference in quality. I think this is a mixture of ensuring the font rendering software on Linux is configured properly, but also that I install a reasonable selection of fonts.

I think that's probably it: not a big list! Notably, I'm not locked into iTunes, which I avoid where possible; Apple's Photo app (formerly iPhoto) which is a bit of a disaster; nor Time Machine, which is excellent, but I have a backup system for other things in place that I can use.

26 June, 2017 04:18PM

June 25, 2017

Andreas Bombe

PDP-8/e Replicated — Introduction

I am creating a replica of the DEC PDP-8/e architecture in an FPGA from schematics of the original hardware. So how did I end up with a project like this?

The story begins with me wanting to have a computer with one of those front panels that have many, many lights where you can really see, in real time, what the computer is doing while it is executing code. Not because I am nostalgic for a prior experience with any of those — I was born a bit too late for that and my first computer as a kid was a Commodore 64.

Now, the front panel era ended around 40 years ago with the advent of microprocessors, and computers of that age and older that are complete and working are hard to find and not cheap. And even if you do find one, there’s the issue of weight, size (complete systems with peripherals fill at least a rack) and power consumption. So what to do — build myself a small one with modern technology of course.

While there are many computer architectures of that era to choose from, the various PDP machines by DEC are significant and well known (and documented) due to their large numbers. The most important are probably the 12 bit PDP-8, the 16 bit PDP-11 and the 36 bit PDP-10. While the PDP-11 is enticing because of the possibility to run UNIX, I wanted to start with something simpler, so I chose the PDP-8.

My implementation on display next to a real PDP-8/e at VCFe 18.0

The Original

DEC started the PDP-8 line of computers (Programmed Data Processors), designed as low cost machines, in 1965. It is a quite minimalist 12 bit architecture based on the earlier PDP-5, and by minimalist I mean seriously minimal. If you are familiar with early 8 bit microprocessors like the 6502 or 8080 you will find them luxuriously equipped in comparison.

The PDP-8 base architecture has a program counter (PC) and an accumulator (AC) [1]. That’s it. There are no pointer or index registers [2]. There is no stack. It has addition and AND instructions, but subtractions and OR operations have to be manually coded. The optional Extended Arithmetic Element adds the MQ register but that’s really it for visible registers. The Wikipedia page on the PDP-8 has a good detailed description.

Regarding technology, the PDP-8 series has been in production long enough to get the whole range of implementations from discrete transistor logic to microprocessors. The 8/e which I target was right in the middle, implemented in TTL logic where each IC contains multiple logic elements. This allowed the CPU itself (including timing generator) to fit on three large circuit boards plugged into a backplane. Complete systems would have at least another board for the front panel and multiple boards for the core memory, then additional boards for whatever options and peripherals were desired.

Design Choices and Comparisons

I’m not the only one who had the idea to build something like that, of course. Among the other modern PDP-8 implementations with a front panel, probably the most prominent project is the Spare Time Gizmos SBC6120, which is a PDP-8 single board computer built around the Harris/Intersil HD-6120 microprocessor (which implements the PDP-8 architecture), combined with a nice front panel. Another is the PiDP-8/I, which is another nice front panel (modeled after the 8/i, which has even more lights) driven by the simh simulator running under Linux on a Raspberry Pi.

My goal is to get front panel lights that appear exactly like the real ones in operation. This necessitates driving the lights at full speed, as they change with every instruction or even within instructions for some display selections. For example, if you run a tight loop that does nothing but increment AC while displaying that register, it would appear that all lights are lit at equal but less than full brightness. The reason is that the loop runs at such a high speed that even the most significant bit, which is blinking the slowest, is too fast to see flicker. Hence they are all effectively 50% on, just at different frequencies, and appear to be constantly lit at the same brightness.
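
This effect is easy to verify with a little simulation (a standalone sketch, not project code) that counts how often each bit of a 12-bit counter is set over one full period:

#include <stdio.h>

int main(void) {
    int ones[12] = {0};
    /* increment a 12-bit counter through one full period and count
       how often each bit is set */
    for (int v = 0; v < 4096; v++)
        for (int b = 0; b < 12; b++)
            ones[b] += (v >> b) & 1;
    /* every bit comes out at 0.50: equal duty cycle, only the blink
       frequency differs per bit */
    for (int b = 0; b < 12; b++)
        printf("bit %2d: %.2f\n", b, ones[b] / 4096.0);
    return 0;
}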

This is where the other projects lack what I am looking for. The PiDP-8/I has a multiplexed display which updates at something like 30 Hz or 60 Hz, taking whatever value is current in the simulation software at the time. All the states the lights took in between are lost, and consequently there is flickering where there shouldn’t be. On the SBC6120, at least the address lines appear to update at full speed, as these are the actual RAM address lines. However, the 6120 microprocessor used does not make the data required for the indicator display externally available. Instead, the SBC6120 runs an interrupt at 30 Hz to trap into its firmware/monitor program, which then reads the current state and writes it to the front panel display, which is essentially just another peripheral. Another considerable problem with the SBC6120 is its use of the 6100 microprocessor family ICs, which are themselves long out of production and not trivial (or cheap) to come by.

Given that the way to go is to drive all lights in step with every cycle [3], this can be done either by software running on a dedicated microcontroller — which is how I started — or by implementing a real CPU with all the needed outputs in an FPGA — which is the project I am writing about.

In the next post I will give an overview of the hardware I built so far and some of the features that are yet to be implemented.


  1. With an associated link bit, which is a little different from a carry bit in that it is treated as a thirteenth bit, i.e. it will be flipped rather than set when a carry occurs.
  2. Although there are 8 specially treated memory addresses that will pre-increment when used in indirect addressing.
  3. Basic cycles on the PDP-8/e are 1.4 µs for memory modifying cycles and fast cycles of 1.2 µs for everything else. Instructions can be one to three cycles long.

25 June, 2017 06:58PM

Steinar H. Gunderson

Frame queue management in Nageru 1.6.1

Nageru 1.6.1 is on its way, and what was intended to only be a release centered around monitoring improvements (more specifically a full set of native Prometheus metrics) actually ended up getting a fairly substantial change to how Nageru manages its frame queues. To understand what's changing and why, it's useful to first understand the history of Nageru's queue management. Nageru 1.0.0 started out with a fairly simple scheme, but with some basics that are still relevant today: One of the input cards was deemed the master card, and whenever it delivers a frame, the master clock ticks and an output frame is produced. (There are some subtleties about dropped frames and/or the master card changing frame rates, but I'm going to ignore them, since they're not important to the discussion.)

To this end, every card keeps a preallocated frame queue; when a card delivers a frame, it's put into the queue, and when the master clock ticks, it tries picking out one frame from each of the other cards' queues to mix together. Note that “mix” here could be as simple as picking one input and throwing all the other ones away; the queueing algorithm doesn't care, it just feeds all of them to the theme and lets that run whatever GPU code it needs to match the user's preferences.

The only thing that really keeps the queues bounded is that the frames in them are preallocated (in GPU memory), so if one queue gets longer than 16 frames, Nageru starts dropping frames from it. But is 16 the right number? There are two conflicting demands here, ignoring memory usage:

  • You want to keep the latency down.
  • You don't want to run out of frames in the queue if you can avoid it; if you drop too aggressively, you could find yourself at the next frame with nothing in the queue, because the input card hasn't delivered it yet when the master card ticks. (You could argue one should delay the output in this case, but for how long? And if you're using HDMI/SDI output, you have no such luxury.)

The 1.0.0 scheme does about as well as one could possibly hope in never dropping frames, but unfortunately, it can be pretty poor at latency. For instance, if your master card runs at 50 Hz and you have a 60 Hz card, the latter will eventually build up a delay of 16 * 16.7 ms = 266.7 ms—clearly unacceptable, and rather unneeded.

You could ask the user to specify a queue length, but the user probably doesn't know, and also shouldn't really have to care—more knobs to twiddle are a bad thing, and even more so knobs the user is expected to twiddle. Thus, Nageru 1.2.0 introduced queue autotuning; it keeps a running estimate on how big the queue needs to be to avoid underruns, simply based on experience. If we've been dropping frames on a queue and then there's an underrun, the “safe queue length” is increased by one, and if the queue has been having excess frames for more than a thousand successive master clock ticks, we reduce it by one again. Whenever the queue has more than this “safe” number, we drop frames.

This was simple, effective and largely fixed the problem. However, when adding metrics, I noticed a peculiar effect: Not all of my devices have equally good clocks. In particular, when setting up for 1080p50, my output card's internal clock (which assumes the role of the master clock when using HDMI/SDI output) seems to tick at about 49.9998 Hz, and my simple home camcorder delivers frames at about 49.9995 Hz. Over the course of an hour, this means one more frame is produced than should be… which should of course be dropped. Having an SDI setup with synchronized clocks (blackburst/tri-level) would of course fix this problem, but most people are not so lucky with their cameras, not to mention the price of PC graphics cards with SDI outputs!

However, this happens very slowly, which means that for a significant amount of time, the two clocks will very nearly be in sync, and thus racing. Who ticks first is determined largely by luck in the jitter (normal is maybe 1ms, but occasionally, you'll see delayed delivery of as much as 10 ms), and this means that the “1000 frames” estimate is likely to be thrown off, and the result is hundreds of dropped frames and underruns in that period. Once the clocks have diverged enough again, you're off the hook, but again, this isn't a good place to be.

Thus, Nageru 1.6.1 changes the algorithm around yet again, by incorporating more data to build an explicit jitter model. 1.5.0 was already timestamping each frame to be able to measure end-to-end latency precisely (now also exposed in Prometheus metrics), but from 1.6.1, these timestamps are actually used in the queueing algorithm. I ran several eight- to twelve-hour tests and simply stored all the event arrivals to a file, and then simulated a few different algorithms (including the old algorithm) to see how they fared in measures such as latency and number of drops/underruns.

I won't go into the full details of the new queueing algorithm (see the commit if you're interested), but the gist is: Based on the last 5000 frames, it tries to estimate the maximum possible jitter for each input (i.e., how late the frame could possibly be). Based on this as well as clock offsets, it determines whether it's really sure that there will be an input frame available on the next master tick even if it drops the queue, and then trims the queue to fit.

The result is pretty satisfying; here's the end-to-end latency of my camera being sent through to the SDI output:

As you can see, the latency goes up, up, up until Nageru figures it's now safe to drop a frame, and then does it in one clean drop event; no more hundreds of drops involved. There are very late frame arrivals involved in this run—two extra frame drops, to be precise—but the algorithm simply determines immediately that they are outliers, and drops them without letting them linger in the queue. (Immediate dropping is usually preferred to sticking around for a bit and then dropping it later, as it means you only get one disturbance event in your stream as opposed to two. Of course, you can only do it if you're reasonably sure it won't lead to more underruns later.)

Nageru 1.6.1 will ship before Solskogen, as I intend to run it there :-) And there will probably be lovely premade Grafana dashboards from the Prometheus data. Although it would have been a lot nicer if Grafana were more packaging-friendly, so I could pick it up from stock Debian and run it on armhf. Hrmf. :-)

25 June, 2017 04:43PM

Lars Wirzenius

Obnam 1.22 released (backup application)

I've just released version 1.22 of Obnam, my backup application. It is the first release for this year. Packages are available on code.liw.fi/debian and in Debian unstable, and source is in git. A summary of the user-visible changes is below.

For those interested in living dangerously and accidentally on purpose deleting all their data, the link below shows the status and roadmap for FORMAT GREEN ALBATROSS: http://distix.obnam.org/obnam-dev/182bd772889544d5867e1a0ce4e76652.html

Version 1.22, released 2017-06-25

  • Lars Wirzenius made Obnam log the full text of an Obnam exception/error message with more than one line. In particular this applies to encryption error messages, which now log the gpg output.

  • Lars Wirzenius made obnam restore require absolute paths for files to be restored.

  • Lars Wirzenius made obnam forget use a little less memory. The amount depends on the number of generations and the chunks they refer to.

  • Jan Niggemann updated the German translation of the Obnam manual to match recent changes in the English version.

  • SanskritFritz and Ian Cambell fixed the kdirstat plugin.

  • Lars Wirzenius changed Obnam to hide a Python stack trace when there's a problem with the SSH connection (e.g., failure to authenticate, or existing connection breaks).

  • Lars Wirzenius made the Green Albatross version of obnam forget actually free chunks that are no longer used.

25 June, 2017 12:41PM

Shirish Agarwal

Dreams don’t cost a penny, mumma’s boy :)

This one I promise will be short 🙂

After the last two updates, I thought it was time for something positive to share. While I’m doing the hard work (of physiotherapy, which is gruelling), the only luxury I have nowadays is falling into dreams, which are also rare. The dream I’m going to share is totally unrealistic, as my mum hates travel, but I’m sure many sons and daughters would identify with it. Almost all her travel outside of visiting relatives has been because I dragged her into it.

In the dream, I am going to a Debconf, get a bursary, and the conference is being held somewhere in Europe, maybe Paris (2019 probably) 🙂 . I reach and attend the conference, present, and generally have a great time sharing and learning from my peers. As we all do in these uncertain times, I too phone home every couple of days, assuring her that I’m well and in the best of health. The weather is mild like it was in South Africa, and this time I had come packed with woollens, so all was well.

Just the day before the conference is to end, I call up mum and she tells me about a specific hotel/hostel which I should check out. I am somewhat surprised that she knows of a specific hotel/hostel in a specific place, but I accede to her request. I go there to find a small, quiet, quaint bed & breakfast place (dunno if Paris has such kinds of places), something which my mum would like. Intuitively, I ask at the reception to look in the register. After seeing my passport, the receptionist too accedes to my request and shows me the logbook of all visitors registered in the last few days (for some reason privacy is not a concern then). I am surprised to find my mother’s name in the register; she had turned up just a day or two before.

I again request the reception to let me go to room abcd without being announced and to have some sort of key (with maintenance or laundry as an excuse). The reception calls the manager, and after looking at copies of my mother’s passport and mine they somehow accede to the request, with help located nearby just in case something goes wrong.

I disguise my voice, announce myself as either room service or maintenance, and we are both surprised and elated to see each other. After talking a while, I go back to the reception and register myself as her guest.

The next week is a whirlwind as I come to know of hop-on, hop-off buses offering a similar service in Paris as in South Africa. I buy a small booklet and we go through all the museums, the vineyards, or whatever it is that Paris has to offer. IIRC there is also a lock and key ritual on a famous bridge. We do that too. She is constantly surprised at the different activities the city shows her, and the mother-son bond becomes much better.

I had shared the same dream with her and she laughed. In reality, she is happy and comfortable in the confines of her home.

Still, I hope some mother-daughter, son-father, son-mother or any combination of parent and sibling takes this entry and enriches each other by travelling together.


Filed under: Miscellenous Tagged: #dream, #mother, #planet-debian, travel

25 June, 2017 04:52AM by shirishag75

June 24, 2017

Lisandro Damián Nicanor Pérez Meyer

Qt 5.7 submodules that didn't make it to Stretch but will be in testing

There are two Qt 5.7 submodules that we could not package in time for Stretch but that are/will be available in their 5.7 versions in testing. These are qtdeclarative-render2d-plugin and qtvirtualkeyboard.

declarative-render2d-plugin makes use of the Raster paint engine instead of OpenGL to render the contents of a scene graph, thus making it useful when Qt Quick2 applications are run on a system without OpenGL 2 enabled hardware. Using it might require tweaking Debian's /etc/X11/Xsession.d/90qt5-opengl. On Qt 5.9 and newer this plugin is merged into Qt GUI, so there should be no need to perform any action on the user's behalf.

Debian's VirtualKeyboard currently has a gotcha: we are not building it with the embedded code it ships. Upstream ships 3rd party code but lacks a way to detect and use the system versions of it. See QTBUG-59594; patches are welcome. Please note that we prefer patches sent directly upstream to the current dev revision; we will be happy to backport patches if necessary.
Yes, this means no hunspell, openwnn, pinyin, tcime nor lipi-toolkit/t9write support.


24 June, 2017 10:41PM by Lisandro Damián Nicanor Pérez Meyer (noreply@blogger.com)

Steve Kemp

Linux security modules, round two.

So recently I wrote a Linux Security Module (LSM) which would deny execution of commands unless an extended attribute existed on the filesystem entry belonging to the executable.

The whitelist-LSM worked well, but it soon became apparent that it was a little pointless. Most security changes are pointless unless you define what you're defending against - your "threat model".

In my case it was written largely as a learning experience, but also because I figured it could be useful. However it wasn't actually that useful, because you soon realize that you have to whitelist too much:

  • The redis-server binary must be executable, to the redis-user, otherwise it won't run.
  • /usr/bin/git must be executable to the git user.

In short there comes a point where user alice must run executable blah. If alice can run it, then so can mallory. At which point you realize the exercise is not so useful.

Taking a step back, I realized that what I wanted to prevent was the execution of unknown/unexpected and malicious binaries. How do you identify known-good binaries? Well, hashes & checksums are good. So for my second attempt I figured I'd not look for a mere "flag" on a binary, but instead look for a valid hash (see the sketch after the following list).

Now my second LSM is invoked for every binary that is executed by a user:

  • When a binary is executed, the SHA1 hash of the file's contents is calculated.
  • If that matches the value stored in an extended attribute the execution is permitted.
    • If the extended-attribute is missing, or the checksum doesn't match, then the execution is denied.
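
For illustration, marking a binary as known-good from userspace might look something like this (a sketch: the attribute name security.whitelist is hypothetical, and the real name depends on what the LSM actually reads):

# store the SHA1 of the binary's contents in an extended attribute
# (the attribute name here is hypothetical)
hash=$(sha1sum /usr/bin/git | awk '{print $1}')
setfattr -n security.whitelist -v "$hash" /usr/bin/git
# confirm what was set
getfattr -n security.whitelist /usr/bin/git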

In practice this is the same behaviour as the previous LSM - a binary is either executable, because there is a good hash, or it is not, because the hash is missing or bogus. If somebody deploys a binary rootkit this will definitely stop it from executing, but of course there is a huge hole - scripting languages:

  • If /usr/bin/perl is whitelisted then /usr/bin/perl /tmp/exploit.pl will succeed.
  • If /usr/bin/python is whitelisted then the same applies.

Despite that, the project was worthwhile: I can clearly describe what it is designed to achieve ("Deny the execution of unknown binaries" and "Deny binaries that have been modified"), and I learned how to hash a file from kernel-space - which was surprisingly simple.

(Yes I know about IMA and EVM - this was a simple project for learning purposes. Public-key signatures will be something I'll look at next/soon/later. :)

Perhaps the only other thing to explore is the complexity in allowing/denying actions based on the user - in a human-readable fashion, not via UIDs. So www-data can execute some programs, alice can run a different set of binaries, and git can only run /usr/bin/git.

Of course down that path lie AppArmor, SELinux, and madness...

24 June, 2017 09:00PM

Ingo Juergensmann

Upgrade to Debian Stretch - GlusterFS fails to mount

Before I upgraded from Jessie to Stretch, everything worked like a charm with glusterfs in Debian. But after I upgraded the first VM to Debian Stretch, I discovered that glusterfs-client was unable to mount the storage on the Jessie servers. I got this in the glusterfs log:

[2017-06-24 12:51:53.240389] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.8 (args: /usr/sbin/glusterfs --read-only --fuse-mountopts=nodev,noexec --volfile-server=192.168.254.254 --volfile-id=/le --fuse-mountopts=nodev,noexec /etc/letsencrypt.sh/certs)
[2017-06-24 12:51:54.534826] E [mount.c:318:fuse_mount_sys] 0-glusterfs-fuse: ret = -1

[2017-06-24 12:51:54.534896] I [mount.c:365:gf_fuse_mount] 0-glusterfs-fuse: direct mount failed (Invalid argument) errno 22, retry to mount via fusermount
[2017-06-24 12:51:56.668254] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2017-06-24 12:51:56.671649] E [glusterfsd-mgmt.c:1590:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
[2017-06-24 12:51:56.671669] E [glusterfsd-mgmt.c:1690:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:/le)
[2017-06-24 12:51:57.014502] W [glusterfsd.c:1327:cleanup_and_exit] (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_handle_reply+0x90) [0x7fbea36c4a20] -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x494) [0x55fbbaed06f4] -->/usr/sbin/glusterfs(cleanup_and_exit+0x54) [0x55fbbaeca444] ) 0-: received signum (0), shutting down
[2017-06-24 12:51:57.014564] I [fuse-bridge.c:5794:fini] 0-fuse: Unmounting '/etc/letsencrypt.sh/certs'.
[2017-06-24 16:44:45.501056] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.8 (args: /usr/sbin/glusterfs --read-only --fuse-mountopts=nodev,noexec --volfile-server=192.168.254.254 --volfile-id=/le --fuse-mountopts=nodev,noexec /etc/letsencrypt.sh/certs)
[2017-06-24 16:44:45.504038] E [mount.c:318:fuse_mount_sys] 0-glusterfs-fuse: ret = -1

[2017-06-24 16:44:45.504084] I [mount.c:365:gf_fuse_mount] 0-glusterfs-fuse: direct mount failed (Invalid argument) errno 22, retry to mount via fusermount

After some searching on the Internet I found Debian #858495, but no solution for my problem. Some search results recommended setting "option rpc-auth-allow-insecure on", but this didn't help. In the end I joined #gluster on Freenode and got some hints there:

JoeJulian | ij__: debian breaks apart ipv4 and ipv6. You'll need to remove the ipv6 ::1 address from localhost in /etc/hosts or recombine your ip stack (it's a sysctl thing)
JoeJulian | It has to do with the decisions made by the debian distro designers. All debian versions should have that problem. (yes, server side).

Removing ::1 from /etc/hosts and from the lo interface did the trick, and I could mount glusterfs storage from the Jessie servers in my Stretch VMs again. However, when I upgraded the glusterfs storages to Stretch as well, this "workaround" didn't work anymore. Some more searching on the Internet led me to this posting on the glusterfs mailing list:

We had seen a similar issue and Rajesh has provided a detailed explanation on why at [1]. I'd suggest you to not to change glusterd.vol but execute "gluster volume set <volname> transport.address-family inet" to allow Gluster to listen on IPv4 by default.

Setting this option instantly fixed my issues with mounting glusterfs storages.
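
For the volume from the logs above, that would be (run on the gluster servers):

gluster volume set le transport.address-family inet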

So, whatever is wrong with glusterfs in Debian, it seems to have something to do with IPv4 and IPv6. When IPv6 is disabled in glusterfs, it works. I added this information to #858495.

24 June, 2017 06:45PM by ij

Riku Voipio

Cross-compiling with debian stretch

Debian stretch comes with cross-compiler packages for selected architectures:
 $ apt-cache search cross-build-essential
crossbuild-essential-arm64 - Informational list of cross-build-essential packages for
crossbuild-essential-armel - ...
crossbuild-essential-armhf - ...
crossbuild-essential-mipsel - ...
crossbuild-essential-powerpc - ...
crossbuild-essential-ppc64el - ...

Let's have a quick exact-steps guide. But first - while you could do all this in your desktop PC rootfs, it is wiser to contain yourself. Fortunately, Debian comes with a container tool out of the box:

sudo debootstrap stretch /var/lib/container/stretch http://deb.debian.org/debian
echo "stretch_cross" | sudo tee /var/lib/container/stretch/etc/debian_chroot
sudo systemd-nspawn -D /var/lib/container/stretch
Then we set up a cross-building environment for arm64 inside the container:

# Tell dpkg we can install arm64
dpkg --add-architecture arm64
# Add src line to make "apt-get source" work
echo "deb-src http://deb.debian.org/debian stretch main" >> /etc/apt/sources.list
apt-get update
# Install cross-compiler and other essential build tools
apt install --no-install-recommends build-essential crossbuild-essential-arm64
Now we have a nice build environment; let's choose something more complicated than the usual kernel/BusyBox to cross-build, qemu:

# Get qemu sources from debian
apt-get source qemu
cd qemu-*
# New in stretch: build-dep works in unpacked source tree
apt-get build-dep -a arm64 .
# Cross-build Qemu for arm64
dpkg-buildpackage -aarm64 -j6 -b
Now that works perfectly for Qemu. For other packages, challenges may appear. For example, you may have to set the "nocheck" flag to skip build-time unit tests (see below), or some of the build-dependencies may not be multiarch-enabled. So work continues :)
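
Skipping the tests looks like this (assuming the package honours the flag):

DEB_BUILD_OPTIONS=nocheck dpkg-buildpackage -aarm64 -j6 -b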

24 June, 2017 04:03PM by Riku Voipio (noreply@blogger.com)

Norbert Preining

Calibre 3 for Debian

I have updated my Calibre Debian repository to include packages of the current Calibre 3.1.1. As with the previous packages, I kept RAR support in to allow me to read comic books. I have also forwarded my changes to the maintainer of Calibre in Debian, so maybe we will soon have official packages, too.

The repository location hasn’t changed, see below.

deb http://www.preining.info/debian/ calibre main
deb-src http://www.preining.info/debian/ calibre main

The releases are signed with my Debian key 0x6CACA448860CDC13.

Enjoy

24 June, 2017 03:42AM by Norbert Preining

June 23, 2017

Joachim Breitner

The perils of live demonstrations

Yesterday, I was giving a talk at the South SF Bay Haskell User Group about how implementing lock-step simulation is trivial in Haskell and how Chris Smith and I are using this to make CodeWorld even more attractive to students. I had given the talk before, at Compose::Conference in New York City earlier this year, so I felt well prepared. On the flight to the West Coast I slightly extended the slides, and as I was too cheap to buy in-flight WiFi, I tested them only locally.

So I arrived at the offices of Target [1] in Sunnyvale, got on the WiFi, uploaded my slides, which are in fact one large interactive CodeWorld program, and tried to run it. But I got a type error…

Turns out that the API of CodeWorld was changed just the day before:

commit 054c811b494746ec7304c3d495675046727ab114
Author: Chris Smith <cdsmith@gmail.com>
Date:   Wed Jun 21 23:53:53 2017 +0000

    Change dilated to take one parameter.
    
    Function is nearly unused, so I'm not concerned about breakage.
    This new version better aligns with standard educational usage,
    in which "dilation" means uniform scaling.  Taken as a separate
    operation, it commutes with rotation, and preserves similarity
    of shapes, neither of which is true of scaling in general.

Ok, that was quick to fix, and the CodeWorld server started to compile my code, and compiled, and aborted. It turned out that my program, presumably the largest CodeWorld interaction out there, hit the time limit of the compiler.

Luckily, Chris Smith just arrived at the venue, and he emergency-bumped the compiler time limit. The program compiled and I could start my presentation.

Unfortunately, the biggest blunder was still awaiting me. I came to the slide where two instances of pong are played over a simulated network, and my point was that the two instances are perfectly in sync. Unfortunately, they were not. I guess it did support my point that lock-step simulation can easily go wrong, but it really left me out in the rain there, and I could not explain it – I had not modified this code since New York, and there it worked flawlessly [2]. In the end, I could save face a bit by running the real pong game against an attendee over the network, and no desynchronisation could be observed there.

Today I dug into it, and it took me a while, but it turned out that the problem was not in CodeWorld, or the lock-step simulation code discussed in our paper about it, but in the code in my presentation that simulated the delayed network messages; in some instances it would deliver the UI events in a different order to the two simulated players, and hence cause them to do something different. Phew.


  1. Yes, the retail giant. Turns out that they have a small but enthusiastic Haskell-using group in their IT department.

  2. I hope the video is going to be online soon, then you can check for yourself.

23 June, 2017 11:54PM by Joachim Breitner (mail@joachim-breitner.de)

Joey Hess

PV array is hot

Only took a couple hours to wire up and mount the combiner box.

PV combiner box with breakers

Something about larger wiring like this is enjoyable. So much less fiddly than what I'm used to.

PV combiner box wiring

And the new PV array is hot!

multimeter reading 66.8 VDC

Update: The panels have an open circuit voltage of 35.89 and are in strings of 2, so I'd expect to see 71.78 V with only my multimeter connected. So I'm losing 0.07 volts to wiring, which is less than I designed for.

23 June, 2017 08:43PM

Elena 'valhalla' Grandi

On brokeness, the live installer and being nice to people

This morning I've read this post: blog.einval.com/2017/06/22#tro.

I understand that somebody on the internet will always be trolling, but I just wanted to point out:

* that the installer in the old live images has been broken (for international users) for years
* that nobody cared enough to fix it, not even the people affected by it (the issue was reported as known in various forums, but for a long time nobody even opened an issue to let the *developers* know).

Compare this with the current situation, with people doing multiple tests as the (quite big number of) images were being built, and a fix released soon after for the issues found.

I'd say that this situation is great, and that instead of trolling around we should thank the people involved in this release for their great job.

23 June, 2017 02:57PM by Elena ``of Valhalla''

Jonathan Dowland

WD drive head parking update

An update to my post on Western Digital Hard Drive head parking: disabling the head-parking completely stopped the Load_Cycle_Count S.M.A.R.T. attribute from incrementing. This is probably at the cost of power usage, but I am not able to assess the impact of that, as I'm not currently monitoring the power draw of the NAS (although that's on my TODO list).
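
For reference, the attribute can be read like this (the device name is an example):

smartctl -A /dev/sda | grep Load_Cycle_Count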

23 June, 2017 02:28PM

Bits from Debian

Hewlett Packard Enterprise Platinum Sponsor of DebConf17

We are very pleased to announce that Hewlett Packard Enterprise (HPE) has committed support to DebConf17 as a Platinum sponsor.

"Hewlett Packard Enterprise is excited to support Debian's annual developer conference again this year", said Steve Geary, Senior Director R&D at Hewlett Packard Enterprise. "As Platinum sponsors and member of the Debian community, HPE is committed to supporting Debconf. The conference, community and open distribution are foundational to the development of The Machine research program and will bring our Memory Driven Computing agenda to life."

HPE is one of the largest computer companies in the world, providing a wide range of products and services, such as servers, storage, networking, consulting and support, software, and financial services.

HPE is also a development partner of Debian, and provides hardware for port development, Debian mirrors, and other Debian services (hardware donations are listed in the Debian machines page).

With this additional commitment as Platinum Sponsor, HPE contributes to making our annual conference possible, and directly supports the progress of Debian and Free Software, helping to strengthen the community that continues to collaborate on Debian projects throughout the rest of the year.

Thank you very much Hewlett Packard Enterprise, for your support of DebConf17!

Become a sponsor too!

DebConf17 is still accepting sponsors. Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf17 website at https://debconf17.debconf.org.

23 June, 2017 02:15PM by Laura Arjona Reina

Arturo Borrero González

Backup router/switch configuration to a git repository

Most routers/switches out there store their configuration in plain text, which is nice for backups. I’m talking about Cisco, Juniper, HPE, etc. The configuration of our routers is being changed several times a day by the operators, and we lacked a proper way of tracking these changes.

Some of these routers come with their own mechanisms for doing backups, and depending on the model and version perhaps they include changes-tracking mechanisms as well. However, they mostly don’t integrate well into our preferred version control system, which is git.

After some internet searching, I found rancid, which is a suite for doing tasks like this. But it seemed rather complex and feature-full for what we required: simply fetch the plain text config and put it into a git repo.

Worth noting that the most important drawback of not triggering the change-tracking from the router/switch is that we have to follow a polling approach: logging into each device, getting the plain text and then committing it to the repo (if changes are detected). This can be hooked into cron, but as I said, we lose the sync behaviour and won’t see any changes until the next cron run.

In most cases, we lose authorship information as well. But that is not important for us right now. In the future this is something that we will have to solve.

Also, some routers/switches lack some basic SSH security improvements, like public-key authentication, so we end up having to hard-code user/pass in our worker script.

Since we have several devices of the same type, we just iterate over their names.

For example, this is what we use for hp comware devices:

#!/bin/bash
# run this script by cron

USER="git"
PASSWORD="readonlyuser"
DEVICES="device1 device2 device3 device4"

FILE="flash:/startup.cfg"
GIT_DIR="myrepo"
GIT="/srv/git/${GIT_DIR}.git"

TMP_DIR="$(mktemp -d)"
if [ -z "$TMP_DIR" ] ; then
	echo "E: no temp dir created" >&2
	exit 1
fi

GIT_BIN="$(which git)"
if [ ! -x "$GIT_BIN" ] ; then
	echo "E: no git binary" >&2
	exit 1
fi

SCP_BIN="$(which scp)"
if [ ! -x "$SCP_BIN" ] ; then
	echo "E: no scp binary" >&2
	exit 1
fi

SSHPASS_BIN="$(which sshpass)"
if [ ! -x "$SSHPASS_BIN" ] ; then
	echo "E: no sshpass binary" >&2
	exit 1
fi

# clone git repo
cd $TMP_DIR
$GIT_BIN clone $GIT
cd $GIT_DIR

for device in $DEVICES; do
	mkdir -p $device
	cd $device

	# fetch cfg
	CONN="${USER}@${device}"
	$SSHPASS_BIN -p "$PASSWORD" $SCP_BIN ${CONN}:${FILE} .

	# commit
	$GIT_BIN add -A .
	$GIT_BIN commit -m "${device}: configuration change" \
		-m "A configuration change was detected" \
		--author="cron <cron@example.com>"

	$GIT_BIN push -f
	cd ..
done

# cleanup
rm -rf $TMP_DIR

You should create a read-only user ‘git’ in the devices. And beware that each device model has the config file stored in a different place.

For reference, in HP comware, the file to scp is flash:/startup.cfg. And you might try creating the user like this:

local-user git class manage
 password hash xxxxx
 service-type ssh
 authorization-attribute user-role security-audit
#

In Junos/Juniper, the file you should scp is /config/juniper.conf.gz, and the script should gunzip the data before committing (see the sketch after the following config). For the read-only user, try something like this:

system {
	[...]
	login {
		[...]
		class git {
			permissions maintenance;
			allow-commands scp.*;
		}
		user git {
			uid xxx;
			class git;
			authentication {
				encrypted-password "xxx";
			}
		}
	}
}
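
The fetch step in the loop above would then become something like this for Junos (a sketch reusing the script’s variables; the gunzip happens before the commit):

# Junos variant of the fetch step
$SSHPASS_BIN -p "$PASSWORD" $SCP_BIN ${CONN}:/config/juniper.conf.gz .
gunzip -f juniper.conf.gz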

The file to scp in HP procurve is /cfg/startup-config. And for the read-only user, try something like this:

aaa authorization group "git user" 1 match-command "scp.*" permit
aaa authentication local-user "git" group "git user" password sha1 "xxxxx"

What would be the ideal situation? Get the device controlled directly by git (i.e. commit --> git hook --> device update) or at least have the device commit the changes by itself to git. I’m open to suggestions :-)

23 June, 2017 08:10AM

June 22, 2017

Steve McIntyre

-1, Trolling

Here's a nice comment I received by email this morning. I guess somebody was upset by my last post?

From: Tec Services <tecservices911@gmail.com>
Date: Wed, 21 Jun 2017 22:30:26 -0700
To: steve@einval.com
Subject: its time for you to retire from debian...unbelievable..your
         the quality guy and fucked up the installer!

i cant ever remember in the hostory of computing someone releasing an installer
that does not work!!

wtf!!!

you need to be retired...due to being retarded..

and that this was dedicated to ian...what a
disaster..you should be ashames..he is probably roling in his grave from shame
right now....

It's nice to be appreciated.

22 June, 2017 09:59PM

John Goerzen

First Experiences with Stretch

I’ve done my first upgrades to Debian stretch at this point. The results have been overall good. On the laptop my kids use, I helped my 10-year-old do it, and it worked flawlessly. On my workstation, I got a kernel panic on boot. Hmm.

Unfortunately, my system has to use the nv drivers, which leaves me with an 80×25 text console. It took some finagling (break=init in grub, then manually insmoding the appropriate stuff based on modules.dep for nouveau), but finally I got a console so I could see what was breaking. It appeared that init was crashing because it couldn’t find liblz4. A little digging shows that liblz4 is in /usr, and /usr wasn’t mounted. I’ve filed the bug on systemd-sysv for this.

I run root on ZFS, and further digging revealed that I had datasets named like this:

  • tank/hostname-1/ROOT
  • tank/hostname-1/usr
  • tank/hostname-1/var

This used to be fine. The mountpoint property of the usr dataset put it at /usr without incident. But it turns out that this won’t work now, unless I set ZFS_INITRD_ADDITIONAL_DATASETS in /etc/default/zfs for some reason. So I renamed them so usr was under ROOT, and then the system booted.
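
The rename itself is straightforward (a sketch using the dataset names above, best done from a rescue environment):

zfs rename tank/hostname-1/usr tank/hostname-1/ROOT/usr
zfs rename tank/hostname-1/var tank/hostname-1/ROOT/var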

Then I ran into samba not liking something in my bind interfaces line (to be fair, it did still say eth0 instead of br0). rpcbind was failing in postinst, though a reboot seems to have fixed that. More annoying was that I had trouble logging into my system because resolv.conf was left empty (despite dns-* entries in /etc/network/interfaces and the presence of resolvconf). I eventually repaired that, and found that it kept removing my “search” line. Eventually I removed resolvconf.

Then mariadb’s postinst was silently failing. I eventually discovered it was sending info to syslog (odd), and /etc/init.d/apparmor teardown let it complete properly. It seems like there may have been an outdated /etc/apparmor.d/cache/usr.sbin.mysql out there for some reason.

Then there was XFCE. I use it with xmonad, and the session startup was really wonky. I had to zap my sessions, my panel config, etc. and start anew. I am still not entirely sure I have it right, but I do at least have a usable system now.

22 June, 2017 01:19PM by John Goerzen

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

nanotime 0.2.0

A new version of the nanotime package for working with nanosecond timestamps just arrived on CRAN.

nanotime uses the RcppCCTZ package for (efficient) high(er) resolution time parsing and formatting up to nanosecond resolution, and the bit64 package for the actual integer64 arithmetic.

Thanks to a metric ton of work by Leonardo Silvestri, the package now uses S4 classes internally allowing for greater consistency of operations on nanotime objects.

Changes in version 0.2.0 (2017-06-22)

  • Rewritten in S4 to provide more robust operations (#17 by Leonardo)

  • Ensure tz="" is treated as unset (Leonardo in #20)

  • Added format and tz arguments to nanotime, format, print (#22 by Leonardo and Dirk)

  • Ensure printing respect options()$max.print, ensure names are kept with vector (#23 by Leonardo)

  • Correct summary() by defining names<- (Leonardo in #25 fixing #24)

  • Report error on operations that are not meaningful for the type; handled NA, NaN, Inf, -Inf correctly (Leonardo in #27 fixing #26)

We also have a diff to the previous version thanks to CRANberries. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

22 June, 2017 12:16PM

hackergotchi for Norbert Preining

Norbert Preining

Signal handling in R

Recently I have been programming quite a lot in R, and today stumbled over the problem of implementing a kind of monitoring loop in R. Typically that would be an infinite loop with sleep calls, but I wanted to allow for waking up from the sleep via sending UNIX style signals, in particular SIGINT. After some searching I found Beyond Exception Handling: Conditions and Restarts from the Advanced R book. But it didn’t really help me a lot to program an interrupt handler.

My requirements were:

  • an interruption of the work-part should be immediately restarted
  • an interruption of the sleep-part should go immediately into the work-part

Unfortunately it seems not to be possible to ignore interrupts entirely from within the R code. The best one can do is install interrupt handlers and try to repeat the code which was being executed when the interrupt happened. This is what I tried to implement with the code below. I still have to digest the documentation about conditions and restarts, and play around a lot, but at least this is an initial working version.

workfun <- function() {
  i <- 1
  do_repeat <- FALSE
  while (TRUE) {
    message("begin of the loop")
    withRestarts(
      {
        # do all the work here
        cat("Entering work part i =", i, "\n");
        Sys.sleep(10)
        i <- i + 1
        cat("finished work part\n")
      }, 
      gotSIG = function() { 
        message("interrupted while working, restarting work part")
        do_repeat <<- TRUE
        NULL
      }
    )
    if (do_repeat) {
      cat("restarting work loop\n")
      do_repeat <- FALSE
      next
    } else {
      cat("between work and sleep part\n")
    }
    withRestarts(
      {
        # do the sleep part here
        cat("Entering sleep part i =", i, "\n")
        Sys.sleep(10)
        i <- i + 1
        cat("finished sleep part\n")
      }, 
      gotSIG = function() {
        message("got work to do, waking up!")
        NULL
      }
    )
    message("end of the loop")
  }
}

cat("Current process:", Sys.getpid(), "\n");

withCallingHandlers({
    workfun()
  },
  interrupt = function(e) {
    invokeRestart("gotSIG")
  })
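To exercise the handlers, send SIGINT from another terminal to the PID printed by Sys.getpid() at startup (Ctrl-C in the running R session does the same); the PID below is of course illustrative:

$ kill -INT 12345    # wakes the sleep part, or restarts the work part

Depending on which part is interrupted, the corresponding gotSIG restart is invoked.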

While not perfect, I guess I have to live with this method for now.

22 June, 2017 08:28AM by Norbert Preining

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppCCTZ 0.2.3 (and 0.2.2)

A new minor version 0.2.3 of RcppCCTZ is now on CRAN.

RcppCCTZ uses Rcpp to bring CCTZ to R. CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries: one for dealing with civil time (human-readable dates and times) and one for converting between absolute and civil times via time zones. The RcppCCTZ page has a few usage examples and details.

This version ensures that we set the TZDIR environment variable correctly on the old dreaded OS that does not come with proper timezone information---an issue which had come up while preparing the next (and awesome, trust me) release of nanotime. It also appears that I failed to blog about 0.2.2, another maintenance release, so changes for both are summarised next.

Changes in version 0.2.3 (2017-06-19)

  • On Windows, the TZDIR environment variable is now set in .onLoad()

  • Replaced init.c with registration code inside of RcppExports.cpp thanks to Rcpp 0.12.11.

Changes in version 0.2.2 (2017-04-20)

  • Synchronized with upstream CCTZ

  • The time_point object is instantiated explicitly for nanosecond use which appears to be required on macOS

We also have a diff to the previous version thanks to CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

22 June, 2017 01:06AM

June 21, 2017

hackergotchi for Joey Hess

Joey Hess

DIY professional grade solar panel installation

I've installed 1 kilowatt of solar panels on my roof, using professional grade equipment. The four panels are Astronergy 260 watt panels, and they're mounted on IronRidge XR100 rails. Did it all myself, without help.

house with 4 solar panels on roof

I had three goals for this install:

  1. Cheap but sturdy. Total cost will be under $2500. It would probably cost at least twice as much to get a professional install, and the pros might not even want to do such a small install.
  2. Learn the roof mount system. I want to be able to add more panels, remove panels when working on the roof, and understand everything.
  3. Make every day a sunny day. With my current solar panels, I get around 10x as much power on a sunny day as a cloudy day, and I have plenty of power on sunny days. So 10x the PV capacity should be a good amount of power all the time.

My main concerns were: would I be able to find the rafters when installing the rails, and would the 5x3 foot panels be too unwieldy to get up on the roof by myself?

I was able to find the rafters, without needing a stud finder, after I removed the roof's vent caps, which exposed the rafters. The shingles were on straight enough that I could follow the lines down and drill into the rafter on the first try every time. And I got the rails on spaced well and straight, although I could have spaced the FlashFeet out better (oops).

My drill ran out of juice half-way, and I had to hack it to recharge on solar power, but that's another story. Between the learning curve, a lot of careful measurement, not the greatest shoes for roofing, and waiting for recharging, it took two days to get the 8 FlashFeet installed and the rails mounted.

Taking a break from that and swimming in the river, I realized I should have been wearing my water shoes on the roof all along. Super soft and nubbly, they make me feel like a gecko up there! After recovering from an (unrelated) achilles tendon strain, I got the panels installed today.

Turns out they're not hard to handle on the roof by myself. Getting them up a ladder to the roof by yourself would normally be another story, but my house has a 2 foot step up from the back retaining wall to the roof, and even has a handy grip beam as you step up.

roof next to the ground with a couple of cinderblock steps

The last gotcha, which I luckily anticipated, is that panels will slide down off the rails before you can get them bolted down. This is where a second pair of hands would have been most useful. But, I macguyvered a solution, attaching temporary clamps before bringing a panel up, that stopped it sliding down while I was attaching it.

clamp temporarily attached to side of panel

I also finished the outside wiring today. Including the one hack of this install so far. Since the local hardware store didn't have a suitable conduit to bring the cables off the roof, I cobbled one together from pipe, with foam inserts to prevent chafing.

some pipe with 5 wires running through it, attached to the side of the roof

While I have 1 kilowatt of power on my roof now, I won't be able to use it until next week. After ordering the upgrade, I realized that my old PWM charge controller would be able to handle less than half the power, and to get even that I would have needed to mount the fuse box near the top of the roof and run down a large and expensive low-voltage high-amperage cable, around 00 AWG size (1 kilowatt at 12 volts is over 80 amps, versus under 7 amps at 150 volts). Instead, I'll be upgrading to a MPPT controller, and running a single 150 volt cable to it.

Then, since the MPPT controller can only handle 1 kilowatt when it's converting to 24 volts, not 12 volts, I'm gonna have to convert the entire house over from 12V DC to 24V DC, including changing all the light fixtures and rewiring the battery bank...

21 June, 2017 10:42PM

Reproducible builds folks

Reproducible Builds: week 112 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday June 11 and Saturday June 17 2017:

Upcoming events

Upstream patches and bugs filed

Reviews of unreproducible packages

1 package review has been added, 19 have been updated and 2 have been removed this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (1)
  • Edmund Grimley Evans (1)

diffoscope development

tests.reproducible-builds.org

As you might have noticed, Debian stretch was released last week. Since then, Mattia and Holger renamed our testing suite to stretch and added a buster suite so that we keep our historic results for stretch visible and can continue our development work as usual. In this sense, happy hacking on buster; may it become the best Debian release ever and hopefully the first reproducible one!

  • Vagrant Cascadian:
  • Valerie Young: Add highlighting in navigation for the new nodes health pages.
  • Mattia Rizzolo:
    • Do not dump database ACL in the backups.
    • Deduplicate SSLCertificateFile directive into the common-directives-ssl macro
    • Apache: t.r-b.o: redirect /testing/ to /stretch/
    • db: s/testing/stretch/g
    • Start adding code to test buster...
  • Holger Levsen:
    • Update README.infrastructure to explain who has root access where.
    • reproducible_nodes_info.sh: correctly recognize zero builds per day.
    • Add build nodes health overview page, then split it in three: health overview, daily munin graphs and weekly munin graphs.
    • reproducible_worker.sh: improve handling of systemctl timeouts.
    • reproducible_build_service: sleep less and thus restart failed workers sooner.
    • Replace ftp.(de|uk|us).debian.org with deb.debian.org everywhere.
    • Performance page: also show local problems with _build_service.sh (which are autofixed after a maximum of 133.7 minutes).
    • Rename nodes_info job to html_nodes_info.
    • Add new node health check jobs, split off from maintenance jobs, run every 15 minutes.
      • Add two new checks: 1. that nodes report a correct date (2019 is incorrect atm, and we sometimes got that); 2. that /tmp is writable (a non-writable /tmp sometimes happens on borked armhf nodes).
    • Add jobs for testing buster.
    • s/testing/stretch/g in all the code.
    • Finish the code to deal with buster.
    • Teach jessie and Ubuntu 16.04 how to debootstrap buster.

Axel Beckert is currently in the process of setting up eight LeMaker HiKey960 boards. These boards were sponsored by Hewlett Packard Enterprise and will be hosted by the SOSETH students association at ETH Zurich. Thanks to everyone involved here and also thanks to Martin Michlmayr and Steve Geary who initiated getting these boards to us.

Misc.

This week's edition was written by Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

21 June, 2017 04:27PM

hackergotchi for Vincent Bernat

Vincent Bernat

IPv4 route lookup on Linux

TL;DR: With its implementation of IPv4 routing tables using LPC-tries, Linux offers good lookup performance (50 ns for a full view) and low memory usage (64 MiB for a full view).


During the lifetime of an IPv4 datagram inside the Linux kernel, one important step is the route lookup for the destination address through the fib_lookup() function. From essential information about the datagram (source and destination IP addresses, interfaces, firewall mark, …), this function should quickly provide a decision. Some possible options are:

  • local delivery (RTN_LOCAL),
  • forwarding to a supplied next hop (RTN_UNICAST),
  • silent discard (RTN_BLACKHOLE).

Since 2.6.39, Linux stores routes into a compressed prefix tree (commit 3630b7c050d9). In the past, a route cache was maintained but it has been removed1 in Linux 3.6.

Route lookup in a trie

Looking up a route in a routing table consists of finding the most specific prefix matching the requested destination. Let’s assume the following routing table:

$ ip route show scope global table 100
default via 203.0.113.5 dev out2
192.0.2.0/25
        nexthop via 203.0.113.7  dev out3 weight 1
        nexthop via 203.0.113.9  dev out4 weight 1
192.0.2.47 via 203.0.113.3 dev out1
192.0.2.48 via 203.0.113.3 dev out1
192.0.2.49 via 203.0.113.3 dev out1
192.0.2.50 via 203.0.113.3 dev out1

Here are some examples of lookups and the associated results:

Destination IP   Next hop
192.0.2.49       203.0.113.3 via out1
192.0.2.50       203.0.113.3 via out1
192.0.2.51       203.0.113.7 via out3 or 203.0.113.9 via out4 (ECMP)
192.0.2.200      203.0.113.5 via out2
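You can also ask the kernel directly which route it would select with ip route get; a sketch, assuming for simplicity that the routes live in the main table (ours sit in table 100, so a matching ip rule would be needed to reach them):

$ ip route get 192.0.2.51     # one of the two ECMP next hops is chosen
$ ip route get 192.0.2.200    # falls through to the default route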

A common structure for route lookup is the trie, a tree structure where each node's prefix extends that of its parent.

Lookup with a simple trie

The following trie encodes the previous routing table:

Simple routing trie

For each node, the prefix is known by its path from the root node and the prefix length is the current depth.

A lookup in such a trie is quite simple: at each step, fetch the nth bit of the IP address, where n is the current depth. If it is 0, continue with the first child. Otherwise, continue with the second. If a child is missing, backtrack until a routing entry is found. For example, when looking for 192.0.2.50, we will find the result in the corresponding leaf (at depth 32). However for 192.0.2.51, we will reach 192.0.2.50/31 but there is no second child. Therefore, we backtrack until the 192.0.2.0/25 routing entry.

Adding and removing routes is quite easy. From a performance point of view, the lookup is done in constant time relative to the number of routes (because the maximum depth is capped at 32).

Quagga is an example of routing software still using this simple approach.

Lookup with a path-compressed trie

In the previous example, most nodes only have one child. This leads to a lot of unneeded bitwise comparisons and memory is also wasted on many nodes. To overcome this problem, we can use path compression: each node with only one child is removed (except if it also contains a routing entry). Each remaining node gets a new property telling how many input bits should be skipped. Such a trie is also known as a Patricia trie or a radix tree. Here is the path-compressed version of the previous trie:

Patricia trie

Since some bits have been ignored, on a match, a final check is executed to ensure all bits from the found entry are matching the input IP address. If not, we must act as if the entry wasn’t found (and backtrack to find a matching prefix). The following figure shows two IP addresses matching the same leaf:

Lookup in a Patricia trie

The reduction in the average depth of the tree compensates for the need to handle those false positives. Insertion and deletion of a routing entry are still easy enough.

Many routing systems use Patricia trees.

Lookup with a level-compressed trie

In addition to path compression, level compression2 detects parts of the trie that are densely populated and replaces them with a single node and an associated vector of 2^k children. This node will handle k input bits instead of just one. For example, here is a level-compressed version of our previous trie:

Level-compressed trie

Such a trie is called an LC-trie or LPC-trie and offers higher lookup performance than a radix tree.

A heuristic is used to decide how many bits a node should handle. On Linux, if the ratio of non-empty children to all children would be above 50% when the node handles an additional bit, the node gets this additional bit. On the other hand, if the current ratio is below 25%, the node loses the responsibility of one bit. Those values are not tunable.

Insertion and deletion become more complex, but lookup times are also improved.

Implementation in Linux

The implementation for IPv4 in Linux exists since 2.6.13 (commit 19baf839ff4a) and is enabled by default since 2.6.39 (commit 3630b7c050d9).

Here is the representation of our example routing table in memory3:

Memory representation of a trie

There are several structures involved.

The trie can be retrieved through /proc/net/fib_trie:

$ cat /proc/net/fib_trie
Id 100:
  +-- 0.0.0.0/0 2 0 2
     |-- 0.0.0.0
        /0 universe UNICAST
     +-- 192.0.2.0/26 2 0 1
        |-- 192.0.2.0
           /25 universe UNICAST
        |-- 192.0.2.47
           /32 universe UNICAST
        +-- 192.0.2.48/30 2 0 1
           |-- 192.0.2.48
              /32 universe UNICAST
           |-- 192.0.2.49
              /32 universe UNICAST
           |-- 192.0.2.50
              /32 universe UNICAST
[...]

For internal nodes, the numbers after the prefix are:

  1. the number of bits handled by the node,
  2. the number of full children (they only handle one bit),
  3. the number of empty children.

Moreover, if the kernel was compiled with CONFIG_IP_FIB_TRIE_STATS, some interesting statistics are available in /proc/net/fib_triestat4:

$ cat /proc/net/fib_triestat
Basic info: size of leaf: 48 bytes, size of tnode: 40 bytes.
Id 100:
        Aver depth:     2.33
        Max depth:      3
        Leaves:         6
        Prefixes:       6
        Internal nodes: 3
          2: 3
        Pointers: 12
Null ptrs: 4
Total size: 1  kB
[...]

When a routing table is very dense, a node can handle many bits. For example, a densely populated routing table with 1 million entries packed in a /12 can have one internal node handling 20 bits. In this case, route lookup is essentially reduced to a lookup in a vector.

The following graph shows the number of internal nodes used relative to the number of routes for different scenarios (routes extracted from an Internet full view, /32 routes spread over 4 different subnets with various densities). When routes are densely packed, the number of internal nodes is quite limited.

Internal nodes and null pointers

Performance

So how performant is a route lookup? The maximum depth stays low (about 6 for a full view), so a lookup should be quite fast. With the help of a small kernel module, we can accurately benchmark5 the fib_lookup() function:

Maximum depth and lookup time

The lookup time is loosely tied to the maximum depth. When the routing table is densely populated, the maximum depth is low and the lookup times are fast.

When forwarding at 10 Gbps, the time budget for a packet would be about 50 ns. Since this is also the time needed for the route lookup alone in some cases, we wouldn’t be able to forward at line rate with only one core. Nonetheless, the results are pretty good and they are expected to scale linearly with the number of cores.

Another interesting figure is the time it takes to insert all those routes into the kernel. Linux is also quite efficient in this area since you can insert 2 million routes in less than 10 seconds:

Insertion time

Memory usage

The memory usage is available directly in /proc/net/fib_triestat. The statistic provided doesn’t account for the fib_info structures, but you should only have a handful of them (one for each possible next-hop). As you can see on the graph below, the memory use is linear with the number of routes inserted, whatever the shape of the routes is.

Memory usage

The results are quite good. With only 256 MiB, about 2 million routes can be stored!

Routing rules

Unless configured without CONFIG_IP_MULTIPLE_TABLES, Linux supports several routing tables and has a system of configurable rules to select the table to use. These rules can be configured with ip rule. By default, there are three of them:

$ ip rule show
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default

Linux will first look for a match in the local table. If it doesn’t find one, it will look in the main table and, as a last resort, the default table.

Builtin tables

The local table contains routes for local delivery:

$ ip route show table local
broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1
broadcast 192.168.117.0 dev eno1 proto kernel scope link src 192.168.117.55
local 192.168.117.55 dev eno1 proto kernel scope host src 192.168.117.55
broadcast 192.168.117.63 dev eno1 proto kernel scope link src 192.168.117.55

This table is populated automatically by the kernel when addresses are configured. Let’s look at the three last lines. When the IP address 192.168.117.55 was configured on the eno1 interface, the kernel automatically added the appropriate routes:

  • a route for 192.168.117.55 for local unicast delivery to the IP address,
  • a route for 192.168.117.63 for broadcast delivery to the broadcast address,
  • a route for 192.168.117.0 for broadcast delivery to the network address.

When 127.0.0.1 was configured on the loopback interface, the same kind of routes were added to the local table. However, a loopback address receives special treatment and the kernel also adds the whole subnet to the local table. As a result, you can ping any IP in 127.0.0.0/8:

$ ping -c1 127.42.42.42
PING 127.42.42.42 (127.42.42.42) 56(84) bytes of data.
64 bytes from 127.42.42.42: icmp_seq=1 ttl=64 time=0.039 ms

--- 127.42.42.42 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms

The main table usually contains all the other routes:

$ ip route show table main
default via 192.168.117.1 dev eno1 proto static metric 100
192.168.117.0/26 dev eno1 proto kernel scope link src 192.168.117.55 metric 100

The default route has been configured by some DHCP daemon. The connected route (scope link) has been automatically added by the kernel (proto kernel) when configuring an IP address on the eno1 interface.

The default table is empty and has little use. It was kept when the current incarnation of advanced routing was introduced in Linux 2.1.68, after a first attempt using “classes” in Linux 2.1.156.

Performance

Since Linux 4.1 (commit 0ddcf43d5d4a), when the set of rules is left unmodified, the main and local tables are merged and the lookup is done with this single table (and the default table if it is not empty). Without specific rules, there is no performance hit when enabling the support for multiple routing tables. However, as soon as you add new rules, some CPU cycles will be spent for each datagram to evaluate them. Here are a couple of graphs demonstrating the impact of routing rules on lookup times:

Routing rules impact on performance

For some reason, the relation is linear when the number of rules is between 1 and 100 but the slope increases noticeably past this threshold. The second graph highlights the negative impact of the first rule (about 30 ns).

A common use of rules is to create virtual routers: interfaces are segregated into domains and when a datagram enters through an interface from domain A, it should use routing table A:

# ip rule add iif vlan457 table 10
# ip rule add iif vlan457 blackhole
# ip rule add iif vlan458 table 20
# ip rule add iif vlan458 blackhole

The blackhole rules may be removed if you are sure there is a default route in each routing table. For example, we add a blackhole default with a high metric to not override a regular default route:

# ip route add blackhole default metric 9999 table 10
# ip route add blackhole default metric 9999 table 20
# ip rule add iif vlan457 table 10
# ip rule add iif vlan458 table 20

To reduce the impact on performance when many interface-specific rules are used, interfaces can be attached to VRF instances and a single rule can be used to select the appropriate table:

# ip link add vrf-A type vrf table 10
# ip link set dev vrf-A up
# ip link add vrf-B type vrf table 20
# ip link set dev vrf-B up
# ip link set dev vlan457 master vrf-A
# ip link set dev vlan458 master vrf-B
# ip rule show
0:      from all lookup local
1000:   from all lookup [l3mdev-table]
32766:  from all lookup main
32767:  from all lookup default

The special l3mdev-table rule was automatically added when configuring the first VRF interface. This rule will select the routing table associated with the VRF owning the input (or output) interface.

VRF was introduced in Linux 4.3 (commit 193125dbd8eb), the performance was greatly enhanced in Linux 4.8 (commit 7889681f4a6c) and the special routing rule was also introduced in Linux 4.8 (commit 96c63fa7393d, commit 1aa6c4f6b8cd). You can find more details about it in the kernel documentation.

Conclusion

The takeaways from this article are:

  • route lookup times hardly increase with the number of routes,
  • densely packed /32 routes lead to amazingly fast route lookups,
  • memory use is low (128 MiB per million routes),
  • no optimization is done on routing rules.

  1. The routing cache was subject to reasonably easy to launch denial of service attacks. It was also believed to not be efficient for high volume sites like Google, but I have first-hand experience that this was not the case for moderately high volume sites. 

  2. “IP-address lookup using LC-tries”, IEEE Journal on Selected Areas in Communications, 17(6):1083-1092, June 1999. 

  3. For internal nodes, the key_vector structure is embedded into a tnode structure. This structure contains information rarely used during lookup, notably the reference to the parent that is usually not needed for backtracking as Linux keeps the nearest candidate in a variable. 

  4. One leaf can contain several routes (struct fib_alias is a list). The number of “prefixes” can therefore be greater than the number of leaves. The system also keeps statistics about the distribution of the internal nodes relative to the number of bits they handle. In our example, all three internal nodes handle 2 bits. 

  5. The measurements are done in a virtual machine with one vCPU. The host is an Intel Core i5-4670K running at 3.7 GHz during the experiment (CPU governor was set to performance). The kernel is Linux 4.11. The benchmark is single-threaded. It runs a warm-up phase, then executes about 100,000 timed iterations and keeps the median. Timings of individual runs are computed from the TSC.

  6. Fun fact: the documentation of this first attempt at more flexible routing is still available in today’s kernel tree and explains the usage of the “default class”

21 June, 2017 09:19AM by Vincent Bernat

June 20, 2017

hackergotchi for Steve McIntyre

Steve McIntyre

So, Stretch happened...

Things mostly went very well, and we've released Debian 9 this weekend past. Many many people worked together to make this possible, and I'd like to extend my own thanks to all of them.

As a project, we decided to dedicate Stretch to our late founder Ian Murdock. He did much of the early work to get Debian going, and inspired many more to help him. I had the good fortune to meet up with Ian years ago at a meetup attached to a Usenix conference, and I remember clearly he was a genuinely nice guy with good ideas. We'll miss him.

For my part in the release process, again I was responsible for producing our official installation and live images. Release day itself went OK, but as is typical the process ran late into Saturday night / early Sunday morning. We made and tested lots of different images, although numbers were down from previous releases as we've stopped making the full CD sets now.

Sunday was the day for the release party in Cambridge. As is traditional, a group of us met up at a local hostelry for some revelry! We hid inside the pub to escape from the ridiculously hot weather we're having at the moment.

Party

Due to a combination of the lack of sleep and the heat, I nearly forgot to even take any photos - apologies to the extra folks who'd been around earlier whom I missed with the camera... :-(

20 June, 2017 10:21PM

Andreas Bombe

New Blog

So I finally got myself a blog to write about my software and hardware projects, my work in Debian and, I guess, stuff. Readers of planet.debian.org, hi! If you can see this I got the configuration right.

For the curious, I’m using a static site generator for this blog — Hugo to be specific — like all the cool kids do these days.

20 June, 2017 10:09PM

Foteini Tsiami

Internationalization, part one

The first part of internationalizing a Greek application is, of course, translating all the Greek text to English. I already knew how to open a user interface (.ui) file with Glade and how to translate/save it from there, and mail the result to the developers.

If only it was that simple! I learned that the code of most open source software is kept on version control systems, which fortunately are a bit similar to Wikis, which I was familiar with, so I didn’t have a lot of trouble understanding the concepts. Thanks to a very brief git crash course from my mentors, I was able to quickly start translating, committing, and even pushing back the updated files.

The other tricky part was internationalizing the Python source code. There, Glade couldn’t be used; a text editor like Pluma was needed. And the messages were part of the source code, so I had to be extra careful not to break the syntax. The English text then needed to be wrapped in _(), which does the gettext call that dynamically translates the messages into the user’s language.
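For context, the usual gettext round-trip around those _() calls looks roughly like this; a sketch assuming Python sources and a Greek locale, with messages.pot and the myapp.mo path being illustrative names:

$ xgettext --language=Python --keyword=_ -o messages.pot *.py
$ msginit -l el -i messages.pot -o el.po
$ msgfmt el.po -o locale/el/LC_MESSAGES/myapp.mo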

All this was very educative, but now that the first part of the internationalization, i.e. the Greek-to-English translation, is over, I think I’ll take some time to read more about the tools that I used!


20 June, 2017 10:00AM by fottsia

hackergotchi for Norbert Preining

Norbert Preining

TeX Live 2017 hits Debian/unstable

Yesterday I uploaded the first packages of TeX Live 2017 to Debian/unstable, meaning that the new release cycle has started. Debian/stretch was released over the weekend, and this opened up unstable for new developments. The upload comprised the following packages: asymptote, cm-super, context, context-modules, texlive-base, texlive-bin, texlive-extra, texlive-lang, texworks, xindy.

I mentioned already in a previous post the following changes:

  • several packages have been merged, some are dropped (e.g. texlive-htmlxml) and one new package (texlive-plain-generic) has been added
  • luatex got updated to 1.0.4, and is now considered stable
  • updmap and fmtutil now require either -sys or -user
  • tlmgr got a shell mode (interactive/scripting interface) and a new feature to add arbitrary TEXMF trees (conf auxtrees); see the examples below
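To illustrate those last two points, a few sample invocations; the auxtree path is just an example:

$ fmtutil -user --all        # rebuild formats for this user
$ fmtutil -sys --all         # ... or system-wide (as root)
$ tlmgr shell                # interactive/scripting interface
$ tlmgr conf auxtrees add ~/texmf-extra   # register an extra TEXMF tree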

The last two changes are described together with other news (easy TEXMF tree management) in the TeX Live release post. These changes more or less sum up the new infrastructure developments in TeX Live 2017.

Since the last release to unstable (which happened on 2017-01-23), about half a year of package updates have accumulated; below is an approximate list of updates (not split into new/updated, though).

Enjoy the brave new world of TeX Live 2017, and please report bugs to the BTS!

Updated/new packages:
academicons, achemso, acmart, acro, actuarialangle, actuarialsymbol, adobemapping, alkalami, amiri, animate, aomart, apa6, apxproof, arabluatex, archaeologie, arsclassica, autoaligne, autobreak, autosp, axodraw2, babel, babel-azerbaijani, babel-english, babel-french, babel-indonesian, babel-japanese, babel-malay, babel-ukrainian, bangorexam, baskervaldx, baskervillef, bchart, beamer, beamerswitch, bgteubner, biblatex-abnt, biblatex-anonymous, biblatex-archaeology, biblatex-arthistory-bonn, biblatex-bookinother, biblatex-caspervector, biblatex-cheatsheet, biblatex-chem, biblatex-chicago, biblatex-claves, biblatex-enc, biblatex-fiwi, biblatex-gb7714-2015, biblatex-gost, biblatex-ieee, biblatex-iso690, biblatex-manuscripts-philology, biblatex-morenames, biblatex-nature, biblatex-opcit-booktitle, biblatex-oxref, biblatex-philosophy, biblatex-publist, biblatex-shortfields, biblatex-subseries, bibtexperllibs, bidi, biochemistry-colors, bookcover, boondox, bredzenie, breqn, bxbase, bxcalc, bxdvidriver, bxjalipsum, bxjaprnind, bxjscls, bxnewfont, bxorigcapt, bxpapersize, bxpdfver, cabin, callouts, chemfig, chemformula, chemmacros, chemschemex, childdoc, circuitikz, cje, cjhebrew, cjk-gs-integrate, cmpj, cochineal, combofont, context, conv-xkv, correctmathalign, covington, cquthesis, crimson, crossrefware, csbulletin, csplain, csquotes, css-colors, cstldoc, ctex, currency, cweb, datetime2-french, datetime2-german, datetime2-romanian, datetime2-ukrainian, dehyph-exptl, disser, docsurvey, dox, draftfigure, drawmatrix, dtk, dviinfox, easyformat, ebproof, elements, endheads, enotez, eqnalign, erewhon, eulerpx, expex, exsheets, factura, facture, fancyhdr, fbb, fei, fetamont, fibeamer, fithesis, fixme, fmtcount, fnspe, fontmfizz, fontools, fonts-churchslavonic, fontspec, footnotehyper, forest, gandhi, genealogytree, glossaries, glossaries-extra, gofonts, gotoh, graphics, graphics-def, graphics-pln, grayhints, gregoriotex, gtrlib-largetrees, gzt, halloweenmath, handout, hang, heuristica, hlist, hobby, hvfloat, hyperref, hyperxmp, ifptex, ijsra, japanese-otf-uptex, jlreq, jmlr, jsclasses, jslectureplanner, karnaugh-map, keyfloat, knowledge, komacv, koma-script, kotex-oblivoir, l3, l3build, ladder, langsci, latex, latex2e, latex2man, latex3, latexbug, latexindent, latexmk, latex-mr, leaflet, leipzig, libertine, libertinegc, libertinus, libertinust1math, lion-msc, lni, longdivision, lshort-chinese, ltb2bib, lualatex-math, lualibs, luamesh, luamplib, luaotfload, luapackageloader, luatexja, luatexko, lwarp, make4ht, marginnote, markdown, mathalfa, mathpunctspace, mathtools, mcexam, mcf2graph, media9, minidocument, modular, montserrat, morewrites, mpostinl, mptrees, mucproc, musixtex, mwcls, mweights, nameauth, newpx, newtx, newtxtt, nfssext-cfr, nlctdoc, novel, numspell, nwejm, oberdiek, ocgx2, oplotsymbl, optidef, oscola, overlays, pagecolor, pdflatexpicscale, pdfpages, pdfx, perfectcut, pgfplots, phonenumbers, phonrule, pkuthss, platex, platex-tools, polski, preview, program, proofread, prooftrees, pst-3dplot, pst-barcode, pst-eucl, pst-func, pst-ode, pst-pdf, pst-plot, pstricks, pstricks-add, pst-solides3d, pst-spinner, pst-tools, pst-tree, pst-vehicle, ptex2pdf, ptex-base, ptex-fontmaps, pxbase, pxchfon, pxrubrica, pythonhighlight, quran, ran_toks, reledmac, repere, resphilosophica, revquantum, rputover, rubik, rutitlepage, sansmathfonts, scratch, seealso, sesstime, siunitx, skdoc, songs, spectralsequences, stackengine, stage, sttools, studenthandouts, svg, tcolorbox, tex4ebook, tex4ht, texosquery, 
texproposal, thaienum, thalie, thesis-ekf, thuthesis, tikz-kalender, tikzmark, tikz-optics, tikz-palattice, tikzpeople, tikzsymbols, titlepic, tl17, tqft, tracklang, tudscr, tugboat-plain, turabian-formatting, txuprcal, typoaid, udesoftec, uhhassignment, ukrainian, ulthese, unamthesis, unfonts-core, unfonts-extra, unicode-math, uplatex, upmethodology, uptex-base, urcls, variablelm, varsfromjobname, visualtikz, xassoccnt, xcharter, xcntperchap, xecjk, xepersian, xetexko, xevlna, xgreek, xsavebox, xsim, ycbook.

20 June, 2017 01:09AM by Norbert Preining

June 19, 2017

Jeremy Bicha

GNOME Tweak Tool 3.25.3

Today I released the second development snapshot (3.25.3) of what will be GNOME Tweak Tool 3.26.

I consider the initial User Interface (UI) rework proposed by the GNOME Design Team to be complete now. Every page in Tweak Tool has been updated, either in this snapshot or the previous development snapshot.

The hard part still remains: making the UI look as good as the mockups. Tweak Tool’s backend makes this a bit more complicated than usual for an app like this.

Here are a few visual highlights of this release.

The Typing page has been moved into an Additional Layout Options dialog in the Keyboard & Mouse page. Also, the Compose Key option has been given its own dialog box.

Florian Müllner added content to the Extensions page that is shown if you don’t have any GNOME Shell extensions installed yet.

A hidden feature that GNOME has had for a long time is the ability to move the Application Menu from the GNOME top bar to a button in the app’s title bar. This is easy to enable in Tweak Tool by turning off the Application Menu switch in the Top Bar page. This release improves how well that works, especially for Ubuntu users where the required hidden appmenu window button was probably not pre-configured.

Some of the ComboBoxes have been replaced by ListBoxes. One example is on the Workspaces page where the new design allows for more information about the different options. The ListBoxes are also a lot easier to select than the smaller ComboBoxes were.

For details of these and other changes, see the commit log or the NEWS file.

GNOME Tweak Tool 3.26 will be released alongside GNOME 3.26 in mid-September.

19 June, 2017 11:15PM by Jeremy Bicha

hackergotchi for Shirish Agarwal

Shirish Agarwal

Seizures, Vigo and bi-pedal motion

Dear all, an update is in order. While talking to my physiotherapist a couple of days ago, I came to know the correct term for what I was experiencing. I had experienced a convulsive ‘seizure’, spasms being a part of it. Reading the wikipedia entry and the associated links/entries, it seems I am and was very very lucky.

The hospital or any hospital is a very bad bad place. I have seen all horror movies which people say are disturbing but have never been disturbed as much as I was in hospital. I couldn’t help but hear people’s screams and saw so many cases which turned critical. At times it was not easy to remain positive but dunno from where there was a will to live which pushed me and is still pushing me.

One of the things that was painful for a long time was the almost constant stream of injections. It was almost an afterthought that the nurse put a Vigo in me.

Similar to the Vigo injected in me.

While the above medical device is similar, mine had a cross, the needle was much shorter, and it is injected into the vein. After that, all injections go in through it, including the common liquid of salt and water, and something commonly given to patients to stabilize them first; I am not remembering the name atm.

I also had a urine bag which was attached to my penis in a non-invasive manner. Both my grandfather and grandma used to cry when things went wrong, while I didn’t feel any pain even when the urine bag was detached and attached again, so it seems things have improved there.

I was also very conscious of getting bed sores, as both my grandpa and grandma had them when in hospital. As I had no strength, I had to beg and plead to make sure that every few hours I was turned from one side to the other. I also had an air bag which is supposed to alleviate or relieve this condition.

Constant physiotherapy every day for a while slowly increased my strength, and slowly both the Vigo and the feeding tube put inside my throat were removed.

I have no memory of when they put in the feeding tube, as it was all rubber, and it felt bad when it came out.

Further physiotherapy helped me crawl to the top of the bed; the bed was around 6 feet in length, more than enough so I could turn to either side without falling over.

A few days later I found I could also sit up using my legs as a lever, and that gave the doctors confidence to remove the air bed so I could crawl more easily.

A couple more days later I stood on my feet for the first time, and it was like I had lead legs. Each step was painful, but the sense and feeling of independence won over whatever pain there was.

I had to endure wet wipes from nurses and ward boys in place of a shower every day, and while they were always respectful, it felt humiliating.

The first time I had a bath after 2 weeks or so, every part of my body cried and I felt like a weakling. I had thought I wouldn’t be able to do justice to the physiotherapy session which was soon after, but after the session I was back to feeling normal.

For a while I was doing the penguin waddle, which, while painful, also had humor in it. I did think of shooting the penguin waddle but decided against it, as I was half-naked most of the time (the hospital clothes never fit me properly).

Cut to today, and I was able to climb up and down the stairs on my own and circle my own block; slowly, but I was able to do it by myself.

While I have always had a sense of wonderment about bi-pedal motion, as well as all other means of transport, I have found much more respect for walking. I live near a fast food joint, so I see lots of youngsters posing in different ways with their legs to show interest to their mates. And this, I know, happens on both the conscious and sub-conscious levels. To be able to see and discern that also put a sense of wonder in nature’s creations.

All in all, I’m probably around 40% independent and still 60% interdependent. I know I have to be patient with myself and those around me, and explain to others what I’m going through.

For example, I still tend to spill things and still can’t touch-type much.

So, the road is long. I can only pray and send best wishes to anybody who is in my condition, and do pray that nobody goes through what I went through, especially not children.

I am also hoping that things like DxtER and range of non-invasive treatments make their way into India and the developing world at large.

To anybody who is overweight and is either disgusted by or doesn’t like the gym route, I would recommend doing sessions with a physiotherapist that you can trust. You have to trust her judgement to push you a bit more, but not so much that the gains you make are toppled over.

I still get dizziness spells while doing therapy, but I have the will to break through it, as I know dizziness doesn’t help me.

I hope my writings give strength and understanding to somebody who is going through it, or to relatives and/or caregivers, so they know the mental state of the person who’s going through it.

Till later and sorry it became so long.

Update – I forgot to share this inspirational story from my city, which I shared with a friend days ago. Add to that, she is from my city. What it doesn’t share is that Triund is a magical place. I had visited once with a friend who had put on elf ears, and it is the kind of place The Alchemist talks about, a place where imagination does turn wild and there is magic in the air.


Filed under: Miscellenous Tagged: #air bag, #bed sores, #convulsive epileptic seizure, #crawling, #horror, #humiliation, #nakedness, #penguin waddle, #physiotherapy, #planet-debian, #spilling things, #urine bag, #Vigo medical device

19 June, 2017 04:49PM by shirishag75

hackergotchi for Vasudev Kamath

Vasudev Kamath

Update: - Shell pipelines with subprocess crate and use of Exec::shell function

In my previous post I used the Exec::shell function from the subprocess crate and passed it a string generated by interpolating the --author argument. This string was then run by the shell via Exec::shell. After publishing the post I got pinged on IRC by Jonas Smedegaard and Paul Wise that I should replace Exec::shell, as it might be prone to errors or shell injection vulnerabilities. Indeed they were right; in my hurry I did not completely read the function documentation, which clearly mentions this fact.

When invoking this function, be careful not to interpolate arguments into the string run by the shell, such as Exec::shell(format!("sort {}", filename)). Such code is prone to errors and, if filename comes from an untrusted source, to shell injection attacks. Instead, use Exec::cmd("sort").arg(filename).

Though I'm not directly taking input from an untrusted source, it's still possible that the string I got back from the git log command contains some oddly formatted text with characters of a different encoding, which could break Exec::shell, as I'm not sanitizing the shell command. When we use Exec::cmd and pass arguments using .args chaining, the library takes care of creating a safe command line. So I went in and modified the function to use Exec::cmd instead of Exec::shell.

Below is updated function.

fn copyright_fromgit(repo: &str) -> Result<Vec<String>> {
    let tempdir = TempDir::new_in(".", "debcargo")?;
    // Wait for the clone to complete before running git log in the checkout
    // (popen() would return without waiting for the process to finish).
    Exec::cmd("git")
     .args(&["clone", "--bare", repo, tempdir.path().to_str().unwrap()])
     .stdout(subprocess::NullFile)
     .stderr(subprocess::NullFile)
     .join()?;

    let author_process = {
        Exec::shell(OsStr::new("git log --format=\"%an <%ae>\"")).cwd(tempdir.path()) |
        Exec::shell(OsStr::new("sort -u"))
    }.capture()?;
    let authors = author_process.stdout_str().trim().to_string();
    let authors: Vec<&str> = authors.split('\n').collect();
    let mut notices: Vec<String> = Vec::new();
    for author in &authors {
        let author_string = format!("--author={}", author);
        let first = {
            Exec::cmd("/usr/bin/git")
             .args(&["log", "--format=%ad",
                    "--date=format:%Y",
                    "--reverse",
                    &author_string])
             .cwd(tempdir.path()) | Exec::shell(OsStr::new("head -n1"))
        }.capture()?;

        let latest = {
            Exec::cmd("/usr/bin/git")
             .args(&["log", "--format=%ad", "--date=format:%Y", &author_string])
             .cwd(tempdir.path()) | Exec::shell("head -n1")
        }.capture()?;

        let start = i32::from_str(first.stdout_str().trim())?;
        let end = i32::from_str(latest.stdout_str().trim())?;
        let cnotice = match start.cmp(&end) {
            Ordering::Equal => format!("{}, {}", start, author),
            _ => format!("{}-{}, {}", start, end, author),
        };

        notices.push(cnotice);
    }

    Ok(notices)
}

I still use Exec::shell for generating the author list; this is not problematic, as I'm not interpolating arguments to create the command string.

19 June, 2017 03:18PM by copyninja

Hideki Yamane

PoC: use Sphinx for debian-policy

Before the party, we had our monthly study meeting, and I gave a talk about a tiny hack for the debian-policy document.

debian-policy was converted from debian-sgml to docbook in 4.0.0, and my proposal is "Go move forward to Sphinx".

Here's a sample, and you can also get the PoC source from my GitHub repo and check it out.
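For anyone who wants to try the idea locally, the usual Sphinx workflow is roughly this, assuming a standard conf.py layout (the PoC repository may be organized differently):

$ pip install sphinx
$ sphinx-build -b html . _build/html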

19 June, 2017 01:09PM by Hideki Yamane (noreply@blogger.com)

hackergotchi for Michal Čihař

Michal Čihař

Call for Weblate translations

Weblate 2.15 is almost ready (I expect no further code changes), so it's a really great time to contribute to its translations! Weblate 2.15 should be released early next week.

As you might expect, Weblate is translated using Weblate, so contributing should be really easy. In case something is unclear, you can look into the Weblate documentation.

I'd especially like to see improvements in the Italian translation, which was one of the first in Weblate's beginnings but hasn't received much love in the past years.

Filed under: Debian English SUSE Weblate

19 June, 2017 04:00AM

June 18, 2017

Simon Josefsson

OpenPGP smartcard under GNOME on Debian 9.0 Stretch

I installed Debian 9.0 “Stretch” on my Lenovo X201 laptop today. Installation went smooth, as usual. GnuPG/SSH with an OpenPGP smartcard — I use a YubiKey NEO — does not work out of the box with GNOME though. I wrote about how to fix OpenPGP smartcards under GNOME with Debian 8.0 “Jessie” earlier, and I thought I’d do a similar blog post for Debian 9.0 “Stretch”. The situation is slightly different than before (e.g., GnuPG works better but SSH doesn’t) so there is some progress. May I hope that Debian 10.0 “Buster” gets this right? Pointers to which package in Debian should have a bug report tracking this issue are welcome (or a pointer to an existing bug report).

After first login, I attempt to use gpg --card-status to check if GnuPG can talk to the smartcard.

jas@latte:~$ gpg --card-status
gpg: error getting version from 'scdaemon': No SmartCard daemon
gpg: OpenPGP card not available: No SmartCard daemon
jas@latte:~$ 

This fails because scdaemon is not installed. Isn’t a smartcard common enough so that this should be installed by default on a GNOME Desktop Debian installation? Anyway, install it as follows.

root@latte:~# apt-get install scdaemon

Then try again.

jas@latte:~$ gpg --card-status
gpg: selecting openpgp failed: No such device
gpg: OpenPGP card not available: No such device
jas@latte:~$ 

I believe scdaemon here attempts to use its internal CCID implementation, and I do not know why it does not work. At this point I often recall that I want pcscd installed since I work with smartcards in general.

root@latte:~# apt-get install pcscd

Now gpg --card-status works!

jas@latte:~$ gpg --card-status

Reader ...........: Yubico Yubikey NEO CCID 00 00
Application ID ...: D2760001240102000006017403230000
Version ..........: 2.0
Manufacturer .....: Yubico
Serial number ....: 01740323
Name of cardholder: Simon Josefsson
Language prefs ...: sv
Sex ..............: male
URL of public key : https://josefsson.org/54265e8c.txt
Login data .......: jas
Signature PIN ....: not forced
Key attributes ...: rsa2048 rsa2048 rsa2048
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 8358
Signature key ....: 9941 5CE1 905D 0E55 A9F8  8026 860B 7FBB 32F8 119D
      created ....: 2014-06-22 19:19:04
Encryption key....: DC9F 9B7D 8831 692A A852  D95B 9535 162A 78EC D86B
      created ....: 2014-06-22 19:19:20
Authentication key: 2E08 856F 4B22 2148 A40A  3E45 AF66 08D7 36BA 8F9B
      created ....: 2014-06-22 19:19:41
General key info..: sub  rsa2048/860B7FBB32F8119D 2014-06-22 Simon Josefsson 
sec#  rsa3744/0664A76954265E8C  created: 2014-06-22  expires: 2017-09-04
ssb>  rsa2048/860B7FBB32F8119D  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
ssb>  rsa2048/9535162A78ECD86B  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
ssb>  rsa2048/AF6608D736BA8F9B  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
jas@latte:~$ 

Using the key will not work though.

jas@latte:~$ echo foo|gpg -a --sign
gpg: no default secret key: No secret key
gpg: signing failed: No secret key
jas@latte:~$ 

This is because the public key and the secret key stub are not available.

jas@latte:~$ gpg --list-keys
jas@latte:~$ gpg --list-secret-keys
jas@latte:~$ 

You need to import the key for this to work. I have some vague memory that gpg --card-status was supposed to do this, but I may be wrong.

jas@latte:~$ gpg --recv-keys 9AA9BDB11BB1B99A21285A330664A76954265E8C
gpg: failed to start the dirmngr '/usr/bin/dirmngr': No such file or directory
gpg: connecting dirmngr at '/run/user/1000/gnupg/S.dirmngr' failed: No such file or directory
gpg: keyserver receive failed: No dirmngr
jas@latte:~$ 

Surprisingly, dirmngr is also not shipped by default so it has to be installed manually.

root@latte:~# apt-get install dirmngr

Below I proceed to trust the clouds to find my key.

jas@latte:~$ gpg --recv-keys 9AA9BDB11BB1B99A21285A330664A76954265E8C
gpg: key 0664A76954265E8C: public key "Simon Josefsson " imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg:               imported: 1
jas@latte:~$ 

Now the public key and the secret key stub are available locally.

jas@latte:~$ gpg --list-keys
/home/jas/.gnupg/pubring.kbx
----------------------------
pub   rsa3744 2014-06-22 [SC] [expires: 2017-09-04]
      9AA9BDB11BB1B99A21285A330664A76954265E8C
uid           [ unknown] Simon Josefsson 
uid           [ unknown] Simon Josefsson 
sub   rsa2048 2014-06-22 [S] [expires: 2017-09-04]
sub   rsa2048 2014-06-22 [E] [expires: 2017-09-04]
sub   rsa2048 2014-06-22 [A] [expires: 2017-09-04]

jas@latte:~$ gpg --list-secret-keys
/home/jas/.gnupg/pubring.kbx
----------------------------
sec#  rsa3744 2014-06-22 [SC] [expires: 2017-09-04]
      9AA9BDB11BB1B99A21285A330664A76954265E8C
uid           [ unknown] Simon Josefsson 
uid           [ unknown] Simon Josefsson 
ssb>  rsa2048 2014-06-22 [S] [expires: 2017-09-04]
ssb>  rsa2048 2014-06-22 [E] [expires: 2017-09-04]
ssb>  rsa2048 2014-06-22 [A] [expires: 2017-09-04]

jas@latte:~$ 

I am now able to sign data with the smartcard, yay!

jas@latte:~$ echo foo|gpg -a --sign
-----BEGIN PGP MESSAGE-----

owGbwMvMwMHYxl2/2+iH4FzG01xJDJFu3+XT8vO5OhmNWRgYORhkxRRZZjrGPJwQ
yxe68keDGkwxKxNIJQMXpwBMRJGd/a98NMPJQt6jaoyO9yUVlmS7s7qm+Kjwr53G
uq9wQ+z+/kOdk9w4Q39+SMvc+mEV72kuH9WaW9bVqj80jN77hUbfTn5mffu2/aVL
h/IneTfaOQaukHij/P8A0//Phg/maWbONUjjySrl+a3tP8ll6/oeCd8g/aeTlH79
i0naanjW4bjv9wnvGuN+LPHLmhUc2zvZdyK3xttN/roHvsdX3f53yTAxeInvXZmd
x7W0/hVPX33Y4nT877T/ak4L057IBSavaPVcf4yhglVI8XuGgaTP666Wuslbliy4
5W5eLasbd33Xd/W0hTINznuz0kJ4r1bLHZW9fvjLduMPq5rS2co9tvW8nX9rhZ/D
zycu/QA=
=I8rt
-----END PGP MESSAGE-----
jas@latte:~$ 

Encrypting to myself will not work smoothly though.

jas@latte:~$ echo foo|gpg -a --encrypt -r simon@josefsson.org
gpg: 9535162A78ECD86B: There is no assurance this key belongs to the named user
sub  rsa2048/9535162A78ECD86B 2014-06-22 Simon Josefsson 
 Primary key fingerprint: 9AA9 BDB1 1BB1 B99A 2128  5A33 0664 A769 5426 5E8C
      Subkey fingerprint: DC9F 9B7D 8831 692A A852  D95B 9535 162A 78EC D86B

It is NOT certain that the key belongs to the person named
in the user ID.  If you *really* know what you are doing,
you may answer the next question with yes.

Use this key anyway? (y/N) 
gpg: signal Interrupt caught ... exiting

jas@latte:~$ 

The reason is that the newly imported key has unknown trust settings. I update the trust settings on my key to fix this, and encrypting now works without a prompt.

jas@latte:~$ gpg --edit-key 9AA9BDB11BB1B99A21285A330664A76954265E8C
gpg (GnuPG) 2.1.18; Copyright (C) 2017 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

pub  rsa3744/0664A76954265E8C
     created: 2014-06-22  expires: 2017-09-04  usage: SC  
     trust: unknown       validity: unknown
ssb  rsa2048/860B7FBB32F8119D
     created: 2014-06-22  expires: 2017-09-04  usage: S   
     card-no: 0006 01740323
ssb  rsa2048/9535162A78ECD86B
     created: 2014-06-22  expires: 2017-09-04  usage: E   
     card-no: 0006 01740323
ssb  rsa2048/AF6608D736BA8F9B
     created: 2014-06-22  expires: 2017-09-04  usage: A   
     card-no: 0006 01740323
[ unknown] (1). Simon Josefsson 
[ unknown] (2)  Simon Josefsson 

gpg> trust
pub  rsa3744/0664A76954265E8C
     created: 2014-06-22  expires: 2017-09-04  usage: SC  
     trust: unknown       validity: unknown
ssb  rsa2048/860B7FBB32F8119D
     created: 2014-06-22  expires: 2017-09-04  usage: S   
     card-no: 0006 01740323
ssb  rsa2048/9535162A78ECD86B
     created: 2014-06-22  expires: 2017-09-04  usage: E   
     card-no: 0006 01740323
ssb  rsa2048/AF6608D736BA8F9B
     created: 2014-06-22  expires: 2017-09-04  usage: A   
     card-no: 0006 01740323
[ unknown] (1). Simon Josefsson 
[ unknown] (2)  Simon Josefsson 

Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)

  1 = I don't know or won't say
  2 = I do NOT trust
  3 = I trust marginally
  4 = I trust fully
  5 = I trust ultimately
  m = back to the main menu

Your decision? 5
Do you really want to set this key to ultimate trust? (y/N) y

pub  rsa3744/0664A76954265E8C
     created: 2014-06-22  expires: 2017-09-04  usage: SC  
     trust: ultimate      validity: unknown
ssb  rsa2048/860B7FBB32F8119D
     created: 2014-06-22  expires: 2017-09-04  usage: S   
     card-no: 0006 01740323
ssb  rsa2048/9535162A78ECD86B
     created: 2014-06-22  expires: 2017-09-04  usage: E   
     card-no: 0006 01740323
ssb  rsa2048/AF6608D736BA8F9B
     created: 2014-06-22  expires: 2017-09-04  usage: A   
     card-no: 0006 01740323
[ unknown] (1). Simon Josefsson 
[ unknown] (2)  Simon Josefsson 
Please note that the shown key validity is not necessarily correct
unless you restart the program.

gpg> quit
jas@latte:~$ echo foo|gpg -a --encrypt -r simon@josefsson.org
-----BEGIN PGP MESSAGE-----

hQEMA5U1Fip47NhrAQgArTvAykj/YRhWVuXb6nzeEigtlvKFSmGHmbNkJgF5+r1/
/hWENR72wsb1L0ROaLIjM3iIwNmyBURMiG+xV8ZE03VNbJdORW+S0fO6Ck4FaIj8
iL2/CXyp1obq1xCeYjdPf2nrz/P2Evu69s1K2/0i9y2KOK+0+u9fEGdAge8Gup6y
PWFDFkNj2YiVa383BqJ+kV51tfquw+T4y5MfVWBoHlhm46GgwjIxXiI+uBa655IM
EgwrONcZTbAWSV4/ShhR9ug9AzGIJgpu9x8k2i+yKcBsgAh/+d8v7joUaPRZlGIr
kim217hpA3/VLIFxTTkkm/BO1KWBlblxvVaL3RZDDNI5AVp0SASswqBqT3W5ew+K
nKdQ6UTMhEFe8xddsLjkI9+AzHfiuDCDxnxNgI1haI6obp9eeouGXUKG
=s6kt
-----END PGP MESSAGE-----
jas@latte:~$ 

So everything is fine, isn’t it? Alas, not quite.

jas@latte:~$ ssh-add -L
The agent has no identities.
jas@latte:~$ 

Tracking this down, I now realize that GNOME’s keyring is used for SSH but GnuPG’s gpg-agent is used for GnuPG. GnuPG uses the environment variable GPG_AGENT_INFO to connect to an agent, and SSH uses the SSH_AUTH_SOCK environment variable to find its agent. The filenames used below leak the knowledge that gpg-agent is used for GnuPG but GNOME keyring is used for SSH.

jas@latte:~$ echo $GPG_AGENT_INFO 
/run/user/1000/gnupg/S.gpg-agent:0:1
jas@latte:~$ echo $SSH_AUTH_SOCK 
/run/user/1000/keyring/ssh
jas@latte:~$ 

Here the same recipe as in my previous blog post works. This time GNOME keyring only has to be disabled for SSH. Disabling GNOME keyring is not sufficient; you also need gpg-agent to start with enable-ssh-support. The simplest way to achieve that is to add a line to ~/.gnupg/gpg-agent.conf, as shown below. When you log in, the script /etc/X11/Xsession.d/90gpg-agent will set the environment variables GPG_AGENT_INFO and SSH_AUTH_SOCK. The latter variable is only set if enable-ssh-support is mentioned in the gpg-agent configuration.

jas@latte:~$ mkdir ~/.config/autostart
jas@latte:~$ cp /etc/xdg/autostart/gnome-keyring-ssh.desktop ~/.config/autostart/
jas@latte:~$ echo 'Hidden=true' >> ~/.config/autostart/gnome-keyring-ssh.desktop 
jas@latte:~$ echo enable-ssh-support >> ~/.gnupg/gpg-agent.conf 
jas@latte:~$ 

Log out from GNOME and log in again. Now you should see ssh-add -L working.

jas@latte:~$ ssh-add -L
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDFP+UOTZJ+OXydpmbKmdGOVoJJz8se7lMs139T+TNLryk3EEWF+GqbB4VgzxzrGjwAMSjeQkAMb7Sbn+VpbJf1JDPFBHoYJQmg6CX4kFRaGZT6DHbYjgia59WkdkEYTtB7KPkbFWleo/RZT2u3f8eTedrP7dhSX0azN0lDuu/wBrwedzSV+AiPr10rQaCTp1V8sKbhz5ryOXHQW0Gcps6JraRzMW+ooKFX3lPq0pZa7qL9F6sE4sDFvtOdbRJoZS1b88aZrENGx8KSrcMzARq9UBn1plsEG4/3BRv/BgHHaF+d97by52R0VVyIXpLlkdp1Uk4D9cQptgaH4UAyI1vr cardno:000601740323
jas@latte:~$ 
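As a quick sanity check, SSH_AUTH_SOCK should now point at gpg-agent's SSH socket rather than at the GNOME keyring socket; the path below is what I would expect on a setup like this, so treat it as illustrative.

jas@latte:~$ echo $SSH_AUTH_SOCK 
/run/user/1000/gnupg/S.gpg-agent.ssh
jas@latte:~$ 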

Topics for further discussion or research include: 1) whether scdaemon, dirmngr and/or pcscd should be pre-installed on Debian desktop systems; 2) whether gpg --card-status should attempt to import the public key and secret key stub automatically; 3) why GNOME keyring is used by default for SSH rather than gpg-agent; 4) whether GNOME keyring should support smartcards, or whether it is better to always use gpg-agent for GnuPG/SSH; and 5) whether something could or should be done to automatically infer the trust setting for a secret key.

Enjoy!

18 June, 2017 10:42PM by simon

hackergotchi for Alexander Wirt

Alexander Wirt

alioth needs your help

It may look as if the decision for pagure as the alioth replacement is already finalized, but that's not really true. I got a lot of feedback and tips in the last weeks, which made me postpone my decision. Several alternative systems were recommended to me, here are a few examples:

and probably several others. I won’t be able to evaluate all of those systems in advance of our sprint. That’s where you come in: if you are familiar with one of those systems, or want to get familiar with them, join us on our mailing list and create a wiki page below https://wiki.debian.org/Alioth/GitNext with a review of your system.

What do we need to know?

  • Feature set compared to current alioth
  • Feature set compared to a popular system like github
  • Some implementation designs
  • Some information about scaling (expect something like 15,000 to 25,000 repos)
  • Support for other version control systems
  • Advantages: why should we choose that system
  • Disadvantages: why shouldn’t we choose that system
  • License
  • Other interesting features
  • Details about extensibility
  • A really nice thing would be a working vagrant box / vagrantfile + ansible/puppet to test things

If you want to start on such a review, please announce it on the mailing list.

If you have questions, ask me on IRC, Twitter or mail. Thanks for your help!

18 June, 2017 07:06PM

hackergotchi for Eriberto Mota

Eriberto Mota

How to migrate from Debian Jessie to Stretch

Welcome to Debian Stretch!

Yesterday, 17 June 2017, Debian 9 (Stretch) was released. I would like to talk about some basic procedures and rules for migrating from Debian 8 (Jessie).

Initial steps

  • The first thing to do is to read the release notes. This is essential to learn about possible bugs and special situations.
  • The second step is to fully update Jessie before migrating to Stretch. To do that, still within Debian 8, run the following commands:
# apt-get update
# apt-get dist-upgrade

Migrating

  • Edit the /etc/apt/sources.list file and change every occurrence of the name jessie to stretch (or use the sed one-liner shown after the example). Below is an example of the contents of this file (it may vary according to your needs):
deb http://ftp.br.debian.org/debian/ stretch main
deb-src http://ftp.br.debian.org/debian/ stretch main

deb http://security.debian.org/ stretch/updates main
deb-src http://security.debian.org/ stretch/updates main
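If you prefer, the renaming can also be done with a one-liner like the following (a sketch; review the resulting file before proceeding):

# sed -i 's/jessie/stretch/g' /etc/apt/sources.list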
  • Then run:
# apt-get update
# apt-get dist-upgrade

If any problem appears, read the error messages and try to solve it. Whether or not you manage to solve it, run the command again:

# apt-get dist-upgrade

If new problems appear, try to solve them; search Google for solutions if necessary. But usually everything will go well and you should not have any problems.

Changes to configuration files

While you are migrating, messages about changes to configuration files may be shown. This can leave some users lost, not knowing what to do. Don't panic.

These messages come in two forms: plain text in the shell or a blue message window. The following text is an example of a shell message:

Configuration file '/etc/rsyslog.conf'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** rsyslog.conf (Y/I/N/O/D/Z) [default=N] ?

The following screen is an example of a window message:

In both cases, it is recommended that you choose to install the new version of the configuration file. This is because the new configuration file will be fully adapted to the newly installed service and may have many new or different options. But don't worry: your settings will not be lost, since a backup of them will be kept. So, in the shell, choose option "Y", and, in the window case, choose the option "install the package maintainer's version". It is very important to write down the name of each modified file. In the case of the window above, it is the /etc/samba/smb.conf file. In the shell case, the file was /etc/rsyslog.conf.

After completing the migration, you will be able to compare the new configuration file with the original one. If the new file was installed after a choice made in the shell, the original file (the one you had before) will keep the same name with the extension .dpkg-old appended. In the case of a choice made through the window, the old file will be kept with the extension .ucf-old. In both cases, you can review the changes that were made and adjust your new file according to your needs.

If you need help viewing the differences between the files, you can use the diff command to compare them. Always diff from the new file to the original one, as if you wanted to see what to change in the new file to make it identical to the original. Example:

# diff -Naur /etc/rsyslog.conf /etc/rsyslog.conf.dpkg-old

At first sight, the lines marked with "+" are the ones that would have to be added to the new file to make it look like the old one, just as the ones marked with "-" would have to be removed. But be careful: it is normal for some lines to differ, since the configuration file was written for a new version of the service or application it belongs to. So change only the lines that are really necessary and that you had changed in the previous file. See the example:

+daemon.*;mail.*;\
+ news.err;\
+ *.=debug;*.=info;\
+ *.=notice;*.=warn |/dev/xconsole
+*.* @sam

In my case, I had originally changed only the last line. So, in the new configuration file, I am only interested in adding that line. Well, if you were the one who made the previous configuration, you will know the right thing to do. Usually there won't be many differences between the files.

Another option for viewing the differences between files is the mcdiff command, provided by the mc package. Example:

# mcdiff /etc/rsyslog.conf /etc/rsyslog.conf.dpkg-old

Problems with graphical environments and applications

You may run into problems with graphical environments, such as GNOME, KDE etc., or with applications such as Mozilla Firefox. In those cases, the problem probably lies in the configuration files of these components in the user's home directory. To check, create a new user in Debian and test with it. If everything works, make a backup of the old settings (or rename them) and let the application create a fresh configuration. For example, for Mozilla Firefox, go to the user's home directory and, with Firefox closed, rename the .mozilla directory to .mozilla.bak, then start Firefox and test (see the command below).
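The rename itself is a single command, run as the affected user with Firefox closed:

$ mv ~/.mozilla ~/.mozilla.bak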

Feeling unsure?

If you are feeling very unsure, install Debian 8, with a graphical environment and everything else, in a virtual machine and migrate it to Debian 9 to test and learn. I suggest VirtualBox as the virtualizer.

Have fun!

18 June, 2017 05:58PM by Eriberto

hackergotchi for Michal Čihař

Michal Čihař

python-gammu for Windows

It has been a few months since I started providing Windows binaries for Gammu, but other parts of the family were still missing. Today, I'm adding python-gammu.

Unlike previous attempts, which used cross-compilation on Linux with Wine, this is also based on AppVeyor, so I still don't have to touch Windows to do it, which is nice :-). This was introduced in python-gammu 2.9 and depends on Gammu 1.38.4.

The good thing is that pip install python-gammu should now work with binary packages if you're using Python 3.5 or 3.6.
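In other words, on a Windows machine with Python 3.5 or 3.6, this should be enough:

pip install python-gammu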

Maybe I'll find time to look into providing Wammu as well, but it's trickier there as it doesn't support Python 3, while python-gammu for Windows can currently only be built for Python 3.5 and 3.6 (due to the MSVC dependencies of older Python versions).

Filed under: Debian English Gammu python-gammu Wammu

18 June, 2017 04:00PM

hackergotchi for Vasudev Kamath

Vasudev Kamath

Rust - Shell like Process pipelines using subprocess crate

I had to extract copyright information from the git repository of the crate upstream. The need arose as part of updating debcargo, a tool to create Debian package sources from a Rust crate.

The general idea behind taking copyright information from git is to extract the first and latest contribution year for every author/committer. This can easily be achieved using the following shell snippet:

for author in $(git log --format="%an" | sort -u); do
   author_email=$(git log --format="%an <%ae>" --author="$author" | head -n1)
   first=$(git \
   log --author="$author" --date=format:%Y --format="%ad" --reverse \
             | head -n1)
   latest=$(git log --author="$author" --date=format:%Y --format="%ad" \
             | head -n1)
   if [ $first -eq $latest ]; then
       echo "$first, $author_email"
   else
       echo "$first-$latest, $author_email"
   fi
done

Now the challenge was to execute these commands in Rust and get the required answer. So as a first step I looked at std::process, the standard library's default support for executing commands.

My idea was to execute the first command to extract the authors into a Rust vector, and then have the 2 remaining commands extract years in a loop. (I do not need the additional author_email command in Rust, as I can easily get both fields from the first command, which is used in the for loop of the shell snippet, and use them inside another loop.) So I set up the 3 commands outside the loop with input and output redirected; the following snippet should give you some idea of what I tried to do.

let output = Command::new("/usr/bin/git")
             .arg("log")
             .arg("--format=%an <%ae>")   // no shell involved, so no extra quoting
             .output()?;                  // run git and capture its stdout
let stdout = String::from_utf8(output.stdout)?;
let authors: Vec<&str> = stdout.split('\n').collect();
let head_n1 = Command::new("/usr/bin/head")
             .arg("-n1")
             .stdin(Stdio::piped())
             .stdout(Stdio::piped())
             .spawn()?;
for author in &authors {
             ...
}

And inside the loop I would create the 2 additional git commands, read their output via a pipe and feed it to the head command. This is where I learned that it is not as straightforward as it looks :-). The std::process::Command type implements neither the Copy nor the Clone trait, which means that after one use of a command value I give up its ownership! And here I started fighting the borrow checker. I needed to duplicate declarations to make sure the required commands were available all the time. Additionally I needed to handle error output at every point, which created too many nested statements, complicating the program and reducing its readability.

When it all started getting out of control I had second thoughts and wondered if it would be better to write this as a shell script, ship it along with debcargo and call the script from the Rust program. This would satisfy my need, but I would have to ship an additional script along with debcargo, which I was not really happy with.

Then a search on crates.io revealed subprocess, a crate designed to be similar to the subprocess module from Python! Though the crate is not highly downloaded, it still looked promising, especially because its Exec type implements the BitOr trait, which allows using the | operator to chain commands. Additionally it allows executing full shell command lines without the argument-by-argument chaining done in the snippet above. The end result is a much simplified, easy to read and correct function which does what was needed. Below is the function I wrote to extract copyright information from a git repo.

fn copyright_fromgit(repo: &str) -> Result<Vec<String>> {
    let tempdir = TempDir::new_in(".", "debcargo")?;
    Exec::shell(OsStr::new(format!("git clone --bare {} {}",
                                repo,
                                tempdir.path().to_str().unwrap())
                              .as_str())).stdout(subprocess::NullFile)
                              .stderr(subprocess::NullFile)
                              .popen()?;

    let author_process = {
         Exec::shell(OsStr::new("git log --format=\"%an <%ae>\"")).cwd(tempdir.path()) |
         Exec::shell(OsStr::new("sort -u"))
     }.capture()?;
    let authors = author_process.stdout_str().trim().to_string();
    let authors: Vec<&str> = authors.split('\n').collect();
    let mut notices: Vec<String> = Vec::new();
    for author in &authors {
        let reverse_command = format!("git log --author=\"{}\" --format=%ad --date=format:%Y \
                                    --reverse",
                                   author);
        let command = format!("git log --author=\"{}\" --format=%ad --date=format:%Y",
                           author);
        let first = {
             Exec::shell(OsStr::new(&reverse_command)).cwd(tempdir.path()) |
             Exec::shell(OsStr::new("head -n1"))
         }.capture()?;

         let latest = {
             Exec::shell(OsStr::new(&command)).cwd(tempdir.path()) | Exec::shell("head -n1")
         }.capture()?;

        let start = i32::from_str(first.stdout_str().trim())?;
        let end = i32::from_str(latest.stdout_str().trim())?;
        let cnotice = match start.cmp(&end) {
            Ordering::Equal => format!("{}, {}", start, author),
            _ => format!("{}-{}, {}", start, end, author),
        };

        notices.push(cnotice);
    }

    Ok(notices)
}

Of course it is not as short as the shell (or probably Python) code, but that is fine: Rust is a systems programming language (intended to replace C/C++), and doing a complex shell pipeline in approximately 50 lines of safe and secure code is very much acceptable. Besides, the code is just as readable as the plain shell snippet, thanks to the | operator implemented by the subprocess crate.

18 June, 2017 03:29PM by copyninja

Hideki Yamane

Debian9 release party in Tokyo

We celebrated the Debian 9 "stretch" release in Tokyo (thanks to Cybozu, Inc. for the venue).

We enjoyed beer, wine, sake, soft drinks, pizza, sandwiches, snacks, and cake & coffee (a Nicaraguan one; it reminded me of DebConf12 :)

18 June, 2017 11:31AM by Hideki Yamane (noreply@blogger.com)

Bits from Debian

Debian 9.0 Stretch has been released!

[Banner: Stretch has been released]

Let yourself be embraced by the purple rubber toy octopus! We're happy to announce the release of Debian 9.0, codenamed Stretch.

Want to install it? Choose your favourite installation media among Blu-ray Discs, DVDs, CDs and USB sticks. Then read the installation manual.

Already a happy Debian user and you only want to upgrade? You can easily upgrade from your current Debian 8 Jessie installation; please read the release notes.

Do you want to celebrate the release? Share the banner from this blog in your blog or your website!

18 June, 2017 06:25AM by Ana Guerrero Lopez and Laura Arjona Reina

hackergotchi for Jonathan Carter

Jonathan Carter

AIMS Desktop 2017.1 is available!

Back at DebConf 15 in Germany, I gave a talk on AIMS Desktop (which was then based on Ubuntu), and our intentions and rationale for wanting to move it over to being Debian based.

Today, alongside the Debian 9 release, we release AIMS Desktop 2017.1, the first AIMS Desktop release based on Debian. For Debian 10, we’d like to get the last remaining AIMS Desktop packages into Debian so that it could be a Debian pure blend.

Students trying out a release candidate at AIMS South Africa

It’s tailored to the needs of students, lecturers and researchers at the African Institute for Mathemetical Sciences, we’re releasing it to the public in the hope that it could be useful for other tertiary education users with an interest in maths and science software. If you run a mirror at your university, it would also be great if you could host a copy. we added an rsync location on the downloads page which you could use to keep it up to date.

18 June, 2017 04:55AM by jonathan

Debian 9 is available!

Congratulations to everyone who has played a part in the creation of Debian GNU/Linux 9.0! It’s a great release; I’ve installed the pre-release versions for friends, family and colleagues, and so far the feedback has been very positive.

This release is dedicated to Ian Murdock, who founded the Debian project in 1993, and sadly passed away on 28 December 2015. On the Debian ISO files, a dedication statement is available at /doc/dedication/dedication-9.0.txt

Here’s a copy of the dedication text:

Dedicated to Ian Murdock
------------------------

Ian Murdock, the founder of the Debian project, passed away
on 28th December 2015 at his home in San Francisco. He was 42.

It is difficult to exaggerate Ian's contribution to Free
Software. He led the Debian Project from its inception in
1993 to 1996, wrote the Debian manifesto in January 1994 and
nurtured the fledgling project throughout his studies at
Purdue University.

Ian went on to be founding director of Linux International,
CTO of the Free Standards Group and later the Linux
Foundation, and leader of Project Indiana at Sun
Microsystems, which he described as "taking the lesson
that Linux has brought to the operating system and providing
that for Solaris".

Debian's success is testament to Ian's vision. He inspired
countless people around the world to contribute their own free
time and skills. More than 350 distributions are known to be
derived from Debian.

We therefore dedicate Debian 9 "stretch" to Ian.

-- The Debian Developers

During this development cycle, the number of source packages in Debian grew from around 21 000 to around 25 000, which means that there’s a whole bunch of new things Debian can make your computer do. If you find something new in this release that you like, post about it on your favourite social networks, using the hashtag #newinstretch – or look it up to see what others have discovered!

18 June, 2017 04:00AM by jonathan

hackergotchi for Benjamin Mako Hill

Benjamin Mako Hill

The Community Data Science Collective Dataverse

I’m pleased to announce the Community Data Science Collective Dataverse. Our dataverse is an archival repository for datasets created by the Community Data Science Collective. The dataverse won’t replace work that collective members have been doing for years to document and distribute data from our research. What we hope it will do is get our data — like our published manuscripts — into the hands of folks in the “forever” business.

Over the past few years, the Community Data Science Collective has published several papers where an important part of the contribution is a dataset. These include:

Recently, we’ve also begun producing replication datasets to go alongside our empirical papers. So far, this includes:

In the case of each of the first group of papers, where the dataset was part of the contribution, we uploaded code and data to a website we’ve created. Of course, even if we do a wonderful job of keeping these websites maintained over time, our research group will eventually cease to exist. When that happens, the data will eventually disappear as well.

The text of our papers will be maintained long after we’re gone in the journal or conference proceedings’ publisher’s archival storage and in our universities’ institutional archives. But what about the data? Since the data is a core part — perhaps the core part — of the contribution of these papers, the data should be archived permanently as well.

Toward that end, our group has created a dataverse. Our dataverse is a repository within the Harvard Dataverse where we have been uploading archival copies of datasets over the last six months. All five of the papers described above are uploaded already. The Scratch dataset, due to access control restrictions, isn’t listed on the main page, but it is online on the site. Moving forward, we’ll be populating it with the new datasets we create as well as with replication datasets for our future empirical papers. We’re currently preparing several more.

The primary point of the CDSC Dataverse is not to provide you with a way to get our data, although you’re certainly welcome to use it that way and it might help make some of it more discoverable. The websites we’ve created (like the ones for redirects and for page protection) will continue to exist and be maintained. The Dataverse is insurance so that, if and when those websites go down, our data will still be accessible.


This post was also published on the Community Data Science Collective blog.

18 June, 2017 02:35AM by Benjamin Mako Hill

June 17, 2017

hackergotchi for Alexander Wirt

Alexander Wirt

Survey about alioth replacement

To get some idea about the expectations and current usage of alioth I created a survey. Please take part in it if you are an alioth user. If you need some background about the coming alioth replacement I recommend to read the great lwn article written by anarcat.

17 June, 2017 10:38PM

June 16, 2017

Bits from Debian

Upcoming Debian 9.0 Stretch!

[Banner: Stretch is coming on 2017-06-17]

The Debian Release Team, in coordination with several other teams, is preparing the last bits needed for releasing Debian 9 Stretch. Please be patient! Lots of steps are involved and some of them take some time, such as building the images, propagating the release through the mirror network, and rebuilding the Debian website so that "stable" points to Debian 9.

Follow the live coverage of the release on https://micronews.debian.org or the @debian profile in your favorite social network! We'll spread the word about what's new in Debian 9, how the release process is progressing during the weekend, and facts about Debian and the wide community of volunteer contributors that make it possible.

16 June, 2017 10:30PM by Laura Arjona Reina

Elena 'valhalla' Grandi

Travel piecepack v0.1

[Image: travel piecepack]

A piecepack (www.piecepack.org/) set of generic board game pieces is nice to have around in case of a sudden spontaneous need of gaming, but carrying my full set (www.trueelena.org/fantastic/fe) takes some room, and is not going to fit in my daily bag.

I've been thinking for a while that a half-size set could be useful, and between yesterday and today I've actually managed to do the first version.

It's (2d) printed on both sides of a single sheet of heavy paper, laminated and then cut, comes with both the basic suites and the playing card expansion and fits in a mint tin divided by origami boxes.

It's just version 0.1 because there are a few issues: first of all I'm not happy with the manual way I used to draw the page: ideally it would have been programmatically generated from the same svg files as the 3d piecepack (with the ability to generate other expansions), but apparently reading paths from an svg and writing it in another svg is not supported in an easy way by the libraries I could find, and looking for it was starting to take much more time than just doing it by hand.

I also still have to assemble the dice; in the picture above I'm just using the ones from the 3d-printed set, but they are a bit too big and only four of them fit in the mint tin. I already have the faces printed, so this is going to be fixed in the next few days.

Source files are available in the same git repository as the 3d-printable piecepack (git.trueelena.org/cgit.cgi/3d/), with the big limitation mentioned above; updates will also be pushed there, just don't hold your breath for it :)

16 June, 2017 04:06PM by Elena ``of Valhalla''

hackergotchi for Michal Čihař

Michal Čihař

New projects on Hosted Weblate

Hosted Weblate also provides free hosting for free software projects. The hosting request queue was over one month long, so it was time to process it and include new projects.

This time, the newly hosted projects include:

We now also host a few new Minetest mods:

If you want to support this effort, please donate to Weblate, especially recurring donations are welcome to make this service alive. You can do them on Liberapay or Bountysource.

Filed under: Debian English SUSE Weblate

16 June, 2017 04:00PM

June 15, 2017

Mike Hommey

Announcing git-cinnabar 0.5.0 beta 2

Git-cinnabar is a git remote helper to interact with Mercurial repositories. It allows you to clone, pull and push from/to Mercurial remote repositories, using git.
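For example, cloning a Mercurial repository goes through the hg:: remote-helper prefix (the URL here is illustrative):

git clone hg::https://hg.example.org/repo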

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.0 beta 1?

  • Enabled support for clonebundles for faster clones when the server provides them.
  • Git packs created by git-cinnabar are now smaller.
  • Added a new git cinnabar upgrade command to handle metadata upgrade separately from fsck.
  • Metadata upgrade is now significantly faster.
  • git cinnabar fsck is also faster.
  • Both now also use significantly less memory.
  • Updated git to 2.13.1 for git-cinnabar-helper.

15 June, 2017 11:12PM by glandium

Jeremy Bicha

#newinstretch : Latest WebKitGTK+

GNOME Web (Epiphany) in Debian 9 "Stretch"

Debian 9 “Stretch”, the latest stable version of the venerable Linux distribution, will be released in a few days. I pushed a last-minute change to get the latest security and feature update of WebKitGTK+ (packaged as webkit2gtk 2.16.3) in before release.

Carlos Garcia Campos discusses what’s new in 2.16, but there are many, many more improvements since the 2.6 version in Debian 8.

Like many things in Debian, this was a team effort from many people. Thank you to the WebKitGTK+ developers, WebKitGTK+ maintainers in Debian, Debian Release Managers, Debian Stable Release Managers, Debian Security Team, Ubuntu Security Team, and testers who all had some part in making this happen.

As with Debian 8, there is no guaranteed security support for webkit2gtk for Debian 9. This time though, there is a chance of periodic security updates without needing to get the updates through backports.

If you would like to help test the next proposed update, please contact me so that I can help coordinate this.

15 June, 2017 04:02PM by Jeremy Bicha

hackergotchi for Rhonda D'Vine

Rhonda D'Vine

Apollo 440

It's been a while. And currently I shouldn't even post but rather pack my stuff because I'll get the keys to my flat in 6 days. Yay!

But, for packing I need a good sound track. And today it is Apollo 440. I saw them live at the Sundance Festival here in Vienna 20 years ago. It's been a while, but their music still gives me power to pull through.

So, without further ado, here are their songs:

  • Ain't Talkin' 'Bout Dub: This is the song I first stumbled upon, and got me into them.
  • Stop The Rock: This was featured in a movie I enjoyed, with a great dancing scene. :)
  • Krupa: Also a very up-cheering song!

As always, enjoy!

/music | permanent link | Comments: 3

15 June, 2017 10:27AM by Rhonda

Enrico Zini

5 years of Debian Diversity Statement

The Debian Project welcomes and encourages participation by everyone.

No matter how you identify yourself or how others perceive you: we welcome you. We welcome contributions from everyone as long as they interact constructively with our community.

While much of the work for our project is technical in nature, we value and encourage contributions from those with expertise in other areas, and welcome them into our community.

The Debian Diversity Statement has recently turned 5 years old, and I still find it the best diversity statement I know of, one of the most welcoming texts I've seen, and the result of one of the best project-wide mailing list discussions I can remember.

15 June, 2017 07:37AM

June 14, 2017

hackergotchi for Joey Hess

Joey Hess

not tabletop solar

Borrowed a pickup truck today to fetch my new solar panels. This is 1 kilowatt of power on my picnic table.

solar panels on picnic table

14 June, 2017 09:48PM

hackergotchi for Steve Kemp

Steve Kemp

Porting pfctl to Linux

If you have a bunch of machines running OpenBSD for firewalling purposes, which is pretty standard, you might start to use source-control to maintain the rulesets. You might go further, and use some kind of integration testing to deploy changes from your revision control system into production.

Of course before you deploy any pf.conf file you need to test that the file contents are valid/correct. If your integration system doesn't run on OpenBSD though you have a couple of choices:

  • Run a test-job that SSH's to the live systems, and tests syntax.
    • Via pfctl -n -f /path/to/rules/pf.conf (see the sketch after this list).
  • Write a tool on your Linux hosts to parse and validate the rules.
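A hedged sketch of that first option, copying the candidate ruleset to a live host and syntax-checking it there (hostname and path are illustrative):

  scp pf.conf firewall1:/tmp/pf.conf.candidate
  ssh firewall1 'pfctl -n -f /tmp/pf.conf.candidate'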

I looked at this last year and got pretty far, but then got distracted. So the other day I picked it up again. It turns out that if you're patient it's not hard to use bison to generate some C code, then glue it together such that you can validate your firewall rules on a Linux system.

  deagol ~/pf.ctl $ ./pfctl ./pf.conf
  ./pf.conf:298: macro 'undefined_variable' not defined
  ./pf.conf:298: syntax error

Unfortunately I had to remove quite a lot of code to get the tool to compile, which means that while some failures like the one above are caught, others are missed. The example above reads:

vlans="{vlan1,vlan2}"
..
pass out on $vlans proto udp from $undefined_variable

Unfortunately the following line does not raise an error:

pass out on vlan12 inet proto tcp from <unknown> to $http_server port {80,443}

That comes about because looking up the value of the table named unknown just silently fails. In slowly removing more and more code to make it compile I lost the ability to keep track of table definitions, both their names and their values. Thus fetching a table by name has become a NOP, and a bogus name results in no error.
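For reference, in a real ruleset a table has to be declared before it is used; a declaration looks something like this (name and addresses are illustrative):

table <http_servers> { 192.0.2.10, 192.0.2.11 }

The missed error above is precisely that no such declaration exists for the unknown table.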

Now it is possible, with more care, that you could use a hashtable library, or similar, to simulate these things. But I kinda stalled, again.

(Similar things happen with fetching a proto by name, I just hardcoded inet, gre, icmp, icmp6, etc. Things that I'd actually use.)

Might be a fun project for somebody with some time anyway! Download the OpenBSD source, e.g. from a github mirror - yeah, yeah, but still. CVS? No thanks! - Then poke around beneath sbin/pfctl/. The main file you'll want to grab is parse.y, although you'll need to setup a bunch of headers too, and write yourself a Makefile. Here's a hint:

  deagol ~/pf.ctl $ tree
  .
  ├── inc
  │   ├── net
  │   │   └── pfvar.h
  │   ├── queue.h
  │   └── sys
  │       ├── _null.h
  │       ├── refcnt.h
  │       └── tree.h
  ├── Makefile
  ├── parse.y
  ├── pf.conf
  ├── pfctl.h
  ├── pfctl_parser.h
  └── y.tab.c

  3 directories, 11 files
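For what it's worth, the build itself boils down to something like the following (a sketch; parse.y still needs your glue code and the stub headers under inc/, and extra compiler flags will likely be needed):

  deagol ~/pf.ctl $ yacc parse.y
  deagol ~/pf.ctl $ cc -o pfctl -I./inc y.tab.c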

14 June, 2017 09:00PM

hackergotchi for Michael Prokop

Michael Prokop

Grml 2017.05 – Codename Freedatensuppe

The Debian stretch release is going to happen soon (on 2017-06-17) and since our latest Grml release is based on a very recent version of Debian stretch, I'm taking this as an opportunity to announce it here as well. So by the end of May we released a new stable release of Grml (the Debian based live system focusing on system administrators' needs), known as version 2017.05 with codename Freedatensuppe.

Details about the changes of the new release are available in the official release notes and as usual the ISOs are available via grml.org/download.

With this new Grml release we finally made the switch from file-rc to systemd. From a user’s point of view this doesn’t change that much, though to prevent having to answer even more mails regarding the switch I wrote down some thoughts in Grml’s FAQ. There are some things that we still need to improve and sort out, but overall the switch to systemd so far went better than anticipated (thanks a lot to the pkg-systemd folks, especially Felipe Sateler and Michael Biebl!).

And last but not least, Darshaka Pathirana helped me a lot with the systemd integration and polishing the release, many thanks!

Happy Grml-ing!

14 June, 2017 08:46PM by mika

hackergotchi for Daniel Pocock

Daniel Pocock

Croissants, Qatar and a Food Computer Meetup in Zurich

In my last blog, I described the plan to hold a meeting in Zurich about the OpenAg Food Computer.

The Meetup page has been gathering momentum but we are still well within the capacity of the room and catering budget so if you are in Zurich, please join us.

Thanks to our supporters

The meeting now has sponsorship from three organizations, Project 21 at ETH, the Debian Project and Free Software Foundation of Europe.

Sponsorship funds help with travel expenses and refreshments.

Food is always in the news

In my previous blog, I referred to a number of food supply problems that have occurred recently. There have been more in the news this week: a potential croissant shortage in France due to the rising cost of butter and Qatar's efforts to air-lift 4,000 cows from the US and Australia, among other things, due to the Saudi Arabia embargo.

The food computer isn't an immediate solution to these problems but it appears to be a helpful step in the right direction.

14 June, 2017 07:53PM by Daniel.Pocock