I live in a ruralish area--a 20-minute drive from the local university, but I'm surrounded by farms. I like the solitude and the space. I can't see any of my neighbors--the closest I get to that is seeing their utility pole light through the trees in the fall/winter. For the same price/month as a 1-bedroom, 1-car garage condo in town, I get a 3-bedroom house + drive-in basement + pole barn. Couldn't care less about the house, but the space in the basement is nice for bigger projects. I see a lot of folks out this way with backyard shops too.
I keep a few chickens; if I wanted I could have a huge garden too. Plenty of people keep cows, horses, chickens, etc.
Power is through a co-op; in my experience it's both cheaper and more reliable than the in-town utilities. Well water tastes good and works as long as I have power.
Internet I get through a WISP; it's neither fast nor cheap, but it's better than satellite (and, at the time, cell service). I did put up a 50' tower for that, which was quite an adventure.
I'm a grad student; the university is perhaps the biggest employer in the area, but there are still some factory jobs and also some commercial science jobs in the area. My landlord used to be the butcher and farmed/raised cattle in his 'spare' time. Sadly, there are significantly fewer local businesses in the area than there were a decade or two ago, according to folks who have been here that long.
I wonder if closed-access publishing is part of why academia seems so insulated from the "real world". People write for their audience, and if the general public can't read academic papers, then academics are going to write as if only other academics are reading.
Likewise, if research output is difficult to access, the feedback loop between ideas and implementation is broken; folks outside academia can't easily comment on the cutting edge work in a field, and academics only have to worry about what other academics think of their work.
Closed-access isn't the factor here; I'm in a field where pretty much all relevant publications are open access, and the same happens.
We're writing that way because we are the explicit intended audience anyway - the purpose of journals and conferences is not to write about our research and see who wants to read it, but entirely the opposite: to set up a venue that a particular research community wants to read and then ask for submissions that would be interesting to other researchers.
Public dissemination is not part of the research-publish feedback loop which drives the actual research; it's a (useful) output of that loop but not really part of it.
The feedback loop is an exchange of novel research, not finding out "what others think of your work". If you've got a better method, that's interesting; if you've got a novel use case of a method that I know, that's interesting; if you've got experimental evidence that contradicts mine, that's interesting; but that's all novel research that would be publishable even with all the existing filters, and pretty much requires the person to be familiar with the field (by which I mean having spent at least hundreds of hours reading relevant research beforehand).
Comments, especially uninformed ones, have too low a signal-to-noise ratio to be worth reading - any active field of research produces more than a person can read anyway, so if anything the researchers want stricter filtering that reduces noise. That's a major purpose of the more selective venues; I want someone else to read and reject most of academic publishing so that I don't have to read it just to mutter the same objections the reviewers would've had, only with more obscenities.
In my experience, academia seems insulated from the "real world", because specialization has increased and (maybe) also the overall scientific literacy in the population has decreased. I might be wrong about the second point, but the first one alone suffices to explain this perception. Even for scientists it has gotten harder to follow what's going on in neighboring disciplines.
In a nutshell, science has become more complicated and generally requires much harder math than it used to. Case in point: Almost everyone can learn the basics of Newtonian Mechanics or Special Relativity, but understanding the basics of String Theory is much, much harder. Our scientific knowledge has grown immensely.
That being said, your implicit assumption seems odd to me. The "general public" was probably never able to read and understand academic papers and folks outside academia never were able to easily comment on cutting edge work at any time in history. (Except for a few very gifted outliers maybe.)
Some things can't be written for a general audience; they can't be distilled that much.
That isn't to suggest that citizen scientists should be discouraged, just that some things are actually pretty difficult to explain and to grasp.
To be very clear, I support open publishing. I just feel obligated to point out that most of the general public isn't going to understand and it isn't ever going to be written with them in mind.
I'm a mathematician. I don't understand some of it.
Math is one thing. I did a lot of research into bio to check if aspartame was dangerous, and while I didn't understand everything, I was able to glean enough.
I have also read many psychology papers and never had a problem with them.
Excellent. Again, I don't ever want to discourage learning. I just know many will not. That and, well, some of it can be pretty rough to digest.
But, absolutely keep reading, learning, growing, and sharing. I'm just not going to publish with the general public being the intended audience. On the other hand, I'm more than happy to try to explain stuff, for those who are curious.
Oh, I didn't think you wanted to discourage anybody - I just wanted to point out that, of the different sciences, theoretical math is probably the most difficult for an outsider to get, and psychology is surprisingly easy.
Ah... That makes sense. I had another person reply as if I were suggesting I weren't for open access - even though I specifically mentioned it. Thus, I was improperly primed, methinks.
But, absolutely... The idea of the citizen scientist is so important to me. I have some complaints about the scientific community, but this may not be the place for that.
> Some things can't be written for a general audience; they can't be distilled that much.
The fact that things cannot be written for a general audience doesn't mean that we need to limit access to information which cannot be understood by a general audience.
When I have seen non-academic people latch on to academic papers (usually for political purposes), they've generally entirely misinterpreted them anyway. To the point of believing that the paper states the opposite of what it does, or even believing that the paper says something that it's explicitly stated it can't say.
Fact of the matter is... few people have any incentive to read academic papers in general, and often cannot understand the language or formats used.
Scientific output is often misrepresented by the media and misinterpreted by large portions of the public. This contributes a lot to the issue you've outlined.
Another problem is that even if all the research in the world were freely available, only a small portion of the population would seek it out.
It doesn't matter how you get a copy of the paper; the citation is the same either way. Plenty of academics have been passing around PDF copies of papers their libraries don't buy access to for years before this!
Also, relatively recently, though probably pre-SciHub, the #ICanHazPDF hashtag was widely used on Twitter to request a paper from whoever happened to have access to it.
It's more readable, but using std::function here introduces a second layer of indirection vs using a plain function pointer.
More specifically, std::function's operator() is virtual, and calls into a subclass that's specialized to function pointers of type void(int). The subclass then performs the actual function pointer call.
Technically function::operator() is not virtual (which wouldn't be very useful as std::function has value semantics), but it does runtime dispatching internally using an unspecified mechanism.
This can be virtual functions or, more commonly, a hand-rolled vtable. In the latter case, if the std::function is constructed with a function pointer exactly matching its signature, it could in principle avoid the thunk and point directly to the function itself. I don't think most implementations bother.
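To make the two call paths concrete, here's a minimal sketch; the dispatch described in the comments is how typical implementations behave, not something the standard mandates:

```cpp
#include <cstdio>
#include <functional>

void handler(int x) { std::printf("got %d\n", x); }

int main() {
    // Plain function pointer: a single indirect call straight to handler.
    void (*fp)(int) = handler;
    fp(1);

    // std::function: the wrapper first dispatches through its internal
    // mechanism (virtual call or hand-rolled vtable, unspecified by the
    // standard) to the stored callable, which then invokes handler.
    std::function<void(int)> f = handler;
    f(2);
    return 0;
}
```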
Zotero will work the way it does with Chrome now: there will be a plugin that integrates with Zotero Standalone, which will continue to be developed using XULRunner.
I think this will be good in the long run--Zotero is complex enough to merit being its own application, and separating Zotero and browser plugins will make it easier to migrate Zotero off XULRunner at some point, if the developers choose to do so.
Laptops are indeed powered off of DC, which means that there will be a constant voltage measured across the positive and negative pins of the MagSafe plug.
However, the voltage measured from either pin to ground might very well fluctuate. Without the grounding pin on the power supply, the DC side of the supply may 'float' with respect to earth ground. If you plot the voltages measured from each pin to ground, it probably looks like a pair of sine waves, one shifted 20 volts above the other.
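Roughly, and assuming the float simply tracks the mains at frequency f (an assumption; not every supply behaves this way), the two pin-to-ground voltages might look like:

```latex
% Assumed model: a common-mode sine from the floating DC side, f = mains frequency.
V_{-}(t) = A \sin(2\pi f t), \qquad V_{+}(t) = A \sin(2\pi f t) + 20\,\mathrm{V}
```

The difference V+ - V- stays a constant 20 V, which is why the laptop still sees clean DC even though both pins wobble relative to earth ground.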
Right, so there's no relative phase change between negative and positive terminals, but there is a phase change between the terminals and ground (which can be interpreted as a frequency relative to ground)? And since I'm grounded, I experience that frequency? Interesting. Is there a relation between the "float frequency" and the AC frequency? Is it the same because of how commutators work?
I suspect that the float frequency is generally equal to the AC frequency, but I suppose it may be possible for it to be some subharmonic of the switching frequency, assuming you've got a switching power supply.
> To avoid extracting irrelevant features, the TSFRESH package has a built-in filtering procedure. This filtering procedure evaluates the explaining power and importance of each characteristic for the regression or classification tasks at hand.
> It is based on the well developed theory of hypothesis testing and uses a multiple test procedure. As a result the filtering process mathematically controls the percentage of irrelevant extracted features.
It seems that the relevance of the features is somewhat tunable based on the p-value you choose for the statistical tests. (Every feature selection algorithm I can think of has some tunable parameter, although the information theoretic ones just depend on the length of features you're willing to consider.)
The individual feature significance tests do not have any parameter, they just generate the p-values.
The only parameter one can tune is the overall percentage of irrelevant extracted features. That is the expected FDR of the Benjamini-Yekutieli procedure.
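For anyone curious what that one knob actually does, here's a hypothetical sketch of the Benjamini-Yekutieli step over a list of per-feature p-values (not tsfresh's actual code; the function and its signature are made up for illustration):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch of the Benjamini-Yekutieli step: given one p-value per candidate
// feature and a target FDR level q, return how many features (taken in
// ascending order of p-value) to keep as "relevant".
std::size_t benjamini_yekutieli(std::vector<double> pvalues, double q) {
    const std::size_t m = pvalues.size();
    std::sort(pvalues.begin(), pvalues.end());

    // Harmonic correction c(m) = sum_{j=1..m} 1/j; this is what makes the
    // procedure valid under arbitrary dependence between the tests.
    double c_m = 0.0;
    for (std::size_t j = 1; j <= m; ++j) c_m += 1.0 / static_cast<double>(j);

    // Keep the k smallest p-values, where k is the largest i such that
    // p_(i) <= i * q / (m * c(m)).
    std::size_t keep = 0;
    for (std::size_t i = 1; i <= m; ++i) {
        if (pvalues[i - 1] <= static_cast<double>(i) * q / (static_cast<double>(m) * c_m))
            keep = i;
    }
    return keep;
}
```

Note that q is the only tunable input: the per-feature tests themselves are parameter-free, and q just sets the expected fraction of false discoveries among the features kept.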