If you missed anything on DZone this week, now's your chance to catch up! This week's best include a Spring MVC 3 view controller example, a look at the new mobile database, Realm, the Java origins of AngularJS, 5 quick points about threads in Java EE, and more.
Interruptions seem a central part of modern working life. Whether it's open-plan offices or the influx of digital tools into the workplace, a distraction never seems far away.
When modelling data with ARIMA models, it is sometimes useful to plot the inverse characteristic roots. The following functions will compute and plot the inverse roots for any fitted ARIMA model (including seasonal models).
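As a rough illustration of the underlying computation (the post's own functions target R's fitted ARIMA objects; the AR coefficients below are made up for the example), the inverse roots of the AR characteristic polynomial can be sketched in Python with numpy:

```python
import numpy as np

def inverse_ar_roots(phi):
    """Inverse roots of the AR characteristic polynomial
    1 - phi_1*z - ... - phi_p*z^p.

    For a stationary model, every inverse root lies inside the unit circle.
    """
    # np.roots expects coefficients ordered from highest degree down
    coeffs = np.r_[-np.asarray(phi, dtype=float)[::-1], 1.0]
    roots = np.roots(coeffs)
    return 1.0 / roots

# Hypothetical AR(2) coefficients, chosen only to demonstrate the check
inv = inverse_ar_roots([0.5, -0.25])
print(np.all(np.abs(inv) < 1))  # stationarity check on the inverse roots
```

Plotting these points against the unit circle is then a one-liner in any plotting library.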
Certainly, Big Data applications are distributed largely because the size of the data on which computations are executed warrants more than a typical application can handle. But scaling the network that provides connectivity between Big Data nodes is not just about creating massive interconnects.
Previously, I was precalculating the completion that maximized the probability of the word, using some bastardized, half-remembered version of Bayes' law. But I think there is a better approach: running simulations on existing data.
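A minimal sketch of that simulation idea (the word log and function names here are hypothetical, not the author's actual code): instead of deriving a probability analytically, sample completions directly from logged data and keep the empirical winner.

```python
import random
from collections import Counter

# Hypothetical keystroke log; "the" dominates the entries matching "the"
history = ["the"] * 8 + ["their"] * 2 + ["banana"]

def best_completion(prefix, log, trials=1000, seed=42):
    """Pick a completion by simulating draws from the observed data."""
    rng = random.Random(seed)
    candidates = [w for w in log if w.startswith(prefix)]
    if not candidates:
        return None
    # Simulate: draw from the observed distribution and count the wins
    wins = Counter(rng.choice(candidates) for _ in range(trials))
    return wins.most_common(1)[0][0]

print(best_completion("the", history))  # "the" wins almost surely at 8/10 share
```

With real data the log would be the user's actual typing history, but the mechanism is the same: the simulation converges on whatever the data favors, no probability algebra required.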
Every week here and in our newsletter, we feature a new developer/blogger from the DZone community. This week we're talking to Pierre-Hugues Charbonneau, Senior IT Consultant and System Architect at CGI Inc., and author of DZone's 200th Refcard: Java Performance Optimization.
// [C# Code Sample]
Dictionary<string, Symbology> collection = new Dictionary<string, Symbology>();
collection.Add("Process Collection", Symbology.DataMatrix);
collection.Add("Dictionary Collection", Symbology.QR);
collection.Add("Aztec BarCode", Symbology.Aztec);

List<Bitmap> images = new List<Bitmap>();
foreach (KeyValuePair<string, Symbology> pair in collection)
{
    using (BarCodeBuilder builder = new BarCodeBuilder())
    {
        builder.CodeText = pair.Key;
        builder.SymbologyType = pair.Value;
        // Collect the rendered barcode (assumes the builder exposes it as BarCodeImage)
        images.Add(builder.BarCodeImage);
    }
}

// Measure the combined canvas: widest barcode, total height
int maxWidth = int.MinValue;
int sumHeight = 0;
foreach (Bitmap bmp in images)
{
    sumHeight += bmp.Height;
    if (maxWidth < bmp.Width)
        maxWidth = bmp.Width;
}

const int offset = 10;
Bitmap resultBitmap = new Bitmap(maxWidth + offset * 2, sumHeight + offset * images.Count);
using (Graphics g = Graphics.FromImage(resultBitmap))
{
    int yPosition = offset;
    for (int i = 0; i < images.Count; ++i)
    {
        Bitmap currentBitmap = images[i];
        g.DrawImage(currentBitmap, offset, yPosition);
        yPosition += currentBitmap.Height + offset;
    }
}
' [VB.NET Code Sample]
Dim collection As New Dictionary(Of String, Symbology)()
collection.Add("Process Collection", Symbology.DataMatrix)
collection.Add("Dictionary Collection", Symbology.QR)
collection.Add("Aztec BarCode", Symbology.Aztec)

Dim images As New List(Of Bitmap)()
For Each pair As KeyValuePair(Of String, Symbology) In collection
    Using builder As New BarCodeBuilder()
        builder.CodeText = pair.Key
        builder.SymbologyType = pair.Value
        ' Collect the rendered barcode (assumes the builder exposes it as BarCodeImage)
        images.Add(builder.BarCodeImage)
    End Using
Next

' Measure the combined canvas: widest barcode, total height
Dim maxWidth As Integer = Integer.MinValue
Dim sumHeight As Integer = 0
For Each bmp As Bitmap In images
    sumHeight += bmp.Height
    If maxWidth < bmp.Width Then
        maxWidth = bmp.Width
    End If
Next

Const offset As Integer = 10
Dim resultBitmap As New Bitmap(maxWidth + offset * 2, sumHeight + offset * images.Count)
Using g As Graphics = Graphics.FromImage(resultBitmap)
    Dim yPosition As Integer = offset
    For i As Integer = 0 To images.Count - 1
        Dim currentBitmap As Bitmap = images(i)
        g.DrawImage(currentBitmap, offset, yPosition)
        yPosition += currentBitmap.Height + offset
    Next
End Using
So you can see where I'm heading with the question posed in the title of this post. It is the reference resolution that is fundamental; an indirection is simply what we do to (re)define what that resolution process ultimately looks like.
We got a customer question about a map/reduce index that produced the wrong results. The problem was a mismatch between the user's conceptual model and how Map/Reduce actually works.
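A hedged illustration of that kind of conceptual mismatch (not the customer's actual index): a map/reduce engine may run reduce again over partial results, so a reduce function that is not re-reducible, like averaging averages, silently gives wrong numbers.

```python
docs = [1, 2, 3, 4, 5, 6]

def avg(values):
    return sum(values) / len(values)

# Wrong: the engine reduces two partial batches, then reduces those results.
# Averaging the averages depends on how the batches were split.
batched = [avg(docs[:2]), avg(docs[2:])]  # [1.5, 4.5]
print(avg(batched), avg(docs))            # 3.0 vs the true mean 3.5

# Safe: carry (sum, count) through reduce so re-reducing stays correct.
def reduce_pairs(pairs):
    return (sum(p[0] for p in pairs), sum(p[1] for p in pairs))

partial = [reduce_pairs([(d, 1) for d in docs[:2]]),
           reduce_pairs([(d, 1) for d in docs[2:]])]
total = reduce_pairs(partial)
print(total[0] / total[1])  # 3.5, regardless of batching
```

The mental-model fix is the same in any engine: reduce must produce output in the same shape as its input, so that reducing the reduced results is still correct.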
Since Scrum is an iterative process, you can have great success by focusing on things that you can say with a fair degree of confidence, while still allowing for some uncertainty in all planning and estimates.
Almost every New Product Introduction process includes some placeholder slots to talk about risk. Everyone knows enough to put mitigation plans in these slots. But those mitigation plans are drafted and then ignored by the vast majority of managers.
When you are making GitHub commits you have to provide a story that explains the changes you are committing to a repository. Many of us just post 'blah blah', 'what I said last time', or any other garbage that just gets us through the moment. You know you've all done it at some point.
-- Relative time with interval
DATE_SUB(NOW(), INTERVAL 25 HOUR)
-- Select all users that were updated in the last 24 hours.
SELECT * FROM users
WHERE users.updated > DATE_SUB(NOW(), INTERVAL 24 HOUR);
-- Select all users that were updated in the last 7 days.
SELECT * FROM users
WHERE users.updated > DATE_SUB(NOW(), INTERVAL 7 DAY);
Whenever a new networking platform is evaluated, one of the early sales calls includes a packet walkthrough. But why?
Econometrics is often “theory driven” while statistics tends to be “data driven”. I discovered this in the interview for my current job when someone criticized my research for being “data driven” and asked me to respond.
So, where did our new event fall on a scale compared to the others? Did I feel like attendees received value and did I feel like we got a good return on our investment? Let me give a little comparison.
Today, DZone released Refcard #200: Java Performance Optimization. To mark the significance of this milestone, this Refcard boasts a complete redesign: all the information you expect from a Refcard in a shiny new package!
It's dangerous to link to lines or blocks of code on GitHub without explicitly specifying the commit hash in the URL: as soon as the branch moves, the linked line numbers point at the wrong code. The trick is to emit the GitHub URL against the HEAD commit on the current branch, with the commit hash baked into the URL.
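A rough sketch of that idea (the repo URL, path, and helper name here are hypothetical, not the tool from the post): resolve HEAD with `git rev-parse` and splice the hash into the permalink.

```python
import subprocess

def permalink(repo_url, path, line, commit=None):
    """Build a GitHub permalink pinned to a commit hash, so the
    linked lines can never drift as the branch moves."""
    if commit is None:
        # Resolve the current HEAD commit of the local repository
        commit = subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True).strip()
    return f"{repo_url}/blob/{commit}/{path}#L{line}"

# Placeholder repo and path, with an explicit hash for the example
print(permalink("https://github.com/user/repo", "src/app.py", 42,
                commit="abc123"))
```

GitHub's own UI does the same thing when you press `y` on a file view: it rewrites the branch name in the address bar to the commit hash.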
The administrators on the server noted the high I/O and that a single thread was constantly busy, and decided that this was likely a hung thread. The long-term fix was to make sure we abort the operation after a while, report to the remote server that we scanned up to a point and had nothing to show for it, and go back to the replication loop.
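A minimal sketch of that kind of deadline-bounded scan (all names here are hypothetical, not the actual replication code): bound the work with a clock, and on timeout return how far the scan got so the caller can report progress and resume later.

```python
import time

def scan_with_deadline(items, deadline_seconds, match):
    """Scan items until done or the deadline expires.
    Returns (matches, scanned_up_to, completed)."""
    start = time.monotonic()
    results = []
    for i, item in enumerate(items):
        if time.monotonic() - start > deadline_seconds:
            # Abort: tell the caller how far we got so it can resume here
            return results, i, False
        if match(item):
            results.append(item)
    return results, len(items), True

# Trivial workload, so this completes well inside the deadline
results, scanned_up_to, completed = scan_with_deadline(
    range(100), 5.0, lambda x: x % 7 == 0)
print(completed, scanned_up_to)
```

The point is the return shape: "scanned up to X, nothing (or this much) to show for it" is exactly what the remote server needs to hear instead of a thread that spins forever.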
I thought it’d be interesting to create some visualisations around the times that people RSVP ‘yes’ to the various Neo4j events that we run in London. I tried to use ggplot to create a bar chart of the data. Unfortunately that resulted in this error: