python – Iterate over a matrix and save in a dictionary

Given this matrix:

array([[0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       ...,
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0]], dtype=uint8)

I want to count the values on each row and then save the counts row by row.
The following code computes the overall totals; I would like the counts per row instead.

def count(image):
    array = np.array(image)
    array[array == 0] = 1
    array[array == 255] = 0
    for row in range(array.shape[0]):
        unique, counts = np.unique(array[row, :], return_counts=True)
        d = dict(zip(unique, counts))
    return d

The result:

{0: 234710, 1: 515}
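A minimal sketch of what per-row counting could look like, assuming a NumPy 2-D array. The function name `count_per_row` is illustrative; the 0→1 / 255→0 remapping mirrors the question's code:

```python
import numpy as np

def count_per_row(image):
    arr = np.array(image)
    # Remap 0 -> 1 and 255 -> 0, as in the question's code
    mapped = np.where(arr == 0, 1, np.where(arr == 255, 0, arr))
    per_row = {}
    for i, row in enumerate(mapped):
        unique, counts = np.unique(row, return_counts=True)
        per_row[i] = dict(zip(unique, counts))  # one dict per row index
    return per_row
```

Keying the result by row index avoids the question's bug of overwriting `d` on every loop iteration.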

Tensor objects are only iterable when eager execution is enabled. To iterate over this tensor use tf.map_fn.

I created a custom loss function and tried to execute it, which raised the error above.

mse = Custom_loss(y_real, y_pred, df_for_loss, price)

def Custom_loss(y_true, y_pred, df_for_loss, price, sample_weight=None, multioutput='uniform_average'):

    df_for_loss['y_pred'] = y_pred
    df_for_loss['agg_closing_stock'] = np.where(df_for_loss.agg_closing_stock > 0, (price / 2) * df_for_loss.agg_closing_stock, df_for_loss.agg_closing_stock)

    df_for_loss['agg_closing_stock'] = np.where(df_for_loss.agg_closing_stock == 0, price * df_for_loss.y_pred, df_for_loss.agg_closing_stock)
    df_for_loss.agg_closing_stock = df_for_loss.agg_closing_stock.astype('float32', raise_on_error=False)
    y_pred = y_pred.astype('float32', raise_on_error=False)
    data = tf.convert_to_tensor(df_for_loss.agg_closing_stock)
    y_pred = K.tf.math.multiply(y_pred, data)

    output_errors = np.average((y_true - y_pred) ** 2, axis=0,
                               weights=sample_weight)
    if isinstance(multioutput, string_types):
        if multioutput == 'raw_values':
            return output_errors
        elif multioutput == 'uniform_average':
            # pass None as weights to np.average: uniform average
            multioutput = None

    return np.average(output_errors, weights=multioutput)
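For reference, the NumPy tail of the loss (the multioutput-averaged MSE, which mirrors scikit-learn's `mean_squared_error` logic) can be isolated from the TensorFlow pieces. This is only a sketch of that fragment, not the asker's full loss; the original error itself comes from mixing NumPy/pandas operations with graph-mode tensors, which in graph mode must instead be expressed with TensorFlow ops:

```python
import numpy as np

def mse_multioutput(y_true, y_pred, sample_weight=None, multioutput='uniform_average'):
    # Average squared error per output column (axis 0 = samples)
    output_errors = np.average((np.asarray(y_true) - np.asarray(y_pred)) ** 2,
                               axis=0, weights=sample_weight)
    if isinstance(multioutput, str):
        if multioutput == 'raw_values':
            return output_errors
        elif multioutput == 'uniform_average':
            multioutput = None  # None weights -> uniform average
    return np.average(output_errors, weights=multioutput)
```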

python 3.x – Iterate through groups of rows with different indexed values

The data looks like this:

Data = {'group_id': ['1', '1', '1', '1', '2', '2', '2', '2'],
        'Source': ['Twitter', 'Instagram', 'Twitter', 'Facebook', 'Facebook', 'Twitter', 'Instagram', 'Facebook'],
        'Severity': [4, 2, 7, 4, 8, 9, 3, 5]}

I need:

1) Take the Severity of the first row of each group.
2) Obtain the absolute difference between every row and the first-row Severity of its group (from #1). Example: group 1's first-row Severity is 4, so the first row's diff = 0, the second row's diff = 2, the third row's diff = 3; the same for group 2.
3) Within each group, find the nearest neighbor of each Source to the first row's Severity.

I have identified the first row and indexed its Severity. But when iterating, the code only uses the last indexed Severity to calculate the difference.

df = pd.DataFrame(Data)
first_row = df.groupby(['group_id']).first()
for row in first_row.itertuples(index=True, name='Pandas'):
    value = getattr(row, 'Severity')
    df['dif'] = (df['Severity'] - value).abs()

I expect the output to be a DataFrame with a 'dif' column added, from which I can extract the nearest neighbors in each group for each Source where True. Then repeat the process: extract the rows where True, and pass the False rows on to find additional rows with a new first-row Severity. Repeat until there are no rows left, or all rows are False.
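The per-group first-row difference (steps 1 and 2) can be computed without an explicit loop using `groupby().transform('first')`. A sketch, assuming `group_id` has one entry per row (the question's dict appears one entry short):

```python
import pandas as pd

data = {'group_id': ['1', '1', '1', '1', '2', '2', '2', '2'],
        'Source': ['Twitter', 'Instagram', 'Twitter', 'Facebook',
                   'Facebook', 'Twitter', 'Instagram', 'Facebook'],
        'Severity': [4, 2, 7, 4, 8, 9, 3, 5]}
df = pd.DataFrame(data)

# Broadcast each group's first Severity to all of its rows, then diff
first = df.groupby('group_id')['Severity'].transform('first')
df['dif'] = (df['Severity'] - first).abs()
```

`transform` keeps the original row alignment, so 'dif' is computed against each row's own group rather than only against the last group seen by the loop.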

c# – How to iterate efficiently over a list and look up data in another DataTable?

I have a list containing 25,000 items. I iterate over all of them, and in each iteration I look up data in another DataTable. The whole routine takes a long time to finish.

private void UpdateCommentFirst(string strCommentPath, string TickerName)
{
    int counter = 0;
    bool QcViewAllFileExist = false;
    bool QcCommentFileExist = false;
    bool AllowUpdate = false;
    string savepath = Convert.ToString(ConfigurationManager.AppSettings["OutputPath"]).Trim() + TickerName + "\\" + TickerName + "_QC-ViewwAll.xml";
    DataSet QCCommentstmp = new DataSet();
    DataSet QCViewAlltmp = new DataSet();

    if (File.Exists(strCommentPath))
    {
        QCCommentstmp.ReadXml(strCommentPath);
        QcCommentFileExist = true;
    }

    if (File.Exists(savepath))
    {
        QCViewAlltmp.ReadXml(savepath);
        QcViewAllFileExist = true;
    }

    if (QcCommentFileExist && QcViewAllFileExist)
    {
        if (QCCommentstmp.Tables.Count > 0)
        {
            if (!QCCommentstmp.Tables[0].Columns.Contains("IgnoreData"))
            {
                AllowUpdate = true;
            }
        }

        if (AllowUpdate)
        {
            List<clsCommentPopup> QCCommentlist = QCCommentstmp.Tables[0].AsEnumerable()
                .Select(row => new clsCommentPopup
                {
                    // BrokerFor, Formula, LineItem, Section, PeriodCollection
                    bolFollowUP = row.Field<string>("FollowUP") == null ? false : Convert.ToBoolean(row.Field<string>("FollowUP")),
                    bolThisPeriod = row.Field<string>("ThisPeriod") == null ? false : Convert.ToBoolean(row.Field<string>("ThisPeriod")),
                    Formula = row.Field<string>("Formula") == null ? string.Empty : row.Field<string>("Formula"),
                    ModelValue = row.Field<string>("ModelValue") == null ? string.Empty : row.Field<string>("ModelValue"),
                    ExternalComment = row.Field<string>("ExternalComment") == null ? string.Empty : row.Field<string>("ExternalComment"),
                    InternalComment = row.Field<string>("InternalComment") == null ? string.Empty : row.Field<string>("InternalComment"),
                    strEndPeriod = row.Field<string>("EndPeriod") == null ? string.Empty : row.Field<string>("EndPeriod"),
                    strStartPeriod = row.Field<string>("StartPeriod") == null ? string.Empty : row.Field<string>("StartPeriod"),
                    PeriodType = row.Field<string>("PeriodType") == null ? string.Empty : row.Field<string>("PeriodType"),
                    SectionFor = row.Field<string>("Section") == null ? string.Empty : row.Field<string>("Section"),
                    LiFor = row.Field<string>("LineItem") == null ? string.Empty : row.Field<string>("LineItem"),
                    QcPeriodFor = row.Field<string>("QcPeriod") == null ? string.Empty : row.Field<string>("QcPeriod"),
                    BrokerFor = row.Field<string>("BrokerFor") == null ? string.Empty : row.Field<string>("BrokerFor"),
                    PeriodCollection = row.Field<string>("PeriodCollection") == null ? string.Empty : row.Field<string>("PeriodCollection"),
                    boolIgnoreValue = row.Field<string>("IgnoreValue") == null ? false : Convert.ToBoolean(row.Field<string>("IgnoreValue")),
                    IgnoreData = !QCCommentstmp.Tables[0].Columns.Contains("IgnoreData") ? string.Empty : (row.Field<string>("IgnoreData") == null ? string.Empty : row.Field<string>("IgnoreData"))
                }).ToList();

            if (QCCommentlist != null)
            {
                foreach (var comment in QCCommentlist)
                {
                    string section = comment.SectionFor;
                    string li = comment.LiFor;
                    string broker = comment.BrokerFor;
                    string period = comment.PeriodCollection;
                    string strQCPeriodValue = "";

                    if (comment.boolIgnoreValue && period.Trim() != "")
                    {
                        var QcViewColumnName = QCViewAlltmp.Tables[0].Columns.Cast<DataColumn>().AsParallel()
                            .Where(x => x.ColumnName.Contains(period))
                            .Select(x => new { x.ColumnName }).FirstOrDefault();

                        if (QcViewColumnName != null)
                        {
                            period = QcViewColumnName.ColumnName;

                            if (period.Trim() != "")
                            {
                                var datarow = QCViewAlltmp.Tables[0].AsEnumerable().AsParallel()
                                    .Where(row => row.Field<string>("GroupKey").Split('~')[0].ToUpper() == section.ToUpper()
                                        && row.Field<string>("GroupKey").Split('~')[1].ToUpper() == li.ToUpper()
                                        && row.Field<string>("Section").ToUpper() == broker.ToUpper());

                                if (datarow != null && datarow.Count() > 0)
                                {
                                    strQCPeriodValue = datarow.FirstOrDefault()[period] != null ? datarow.FirstOrDefault()[period].ToString() : string.Empty;
                                    if (strQCPeriodValue.Trim() != string.Empty)
                                    {
                                        comment.IgnoreData = strQCPeriodValue;
                                        counter++;
                                    }
                                }
                            }
                        }
                    }
                }
            }

            SerializeQcComment(QCCommentlist);
            toolTip1.Hide(this);
        }
    }
}

Loading data from an XML file into a DataSet.

If the "IgnoreData" column is present in the DataSet table QCCommentstmp (QCCommentstmp.Tables[0].Columns.Contains("IgnoreData")), the rows of the QCCommentstmp table are deserialized into the list List<clsCommentPopup> QCCommentlist.

The code then iterates over QCCommentlist with a loop and, on each iteration, searches for matching data in the QCViewAlltmp.Tables[0] DataTable.

When QCCommentlist holds 25,000 items, I iterate through all 25,000 and search the other DataTable every time. If data is found, I update the item in the list. This process is very slow, and the code takes a long time to complete the whole iteration and lookup.

Please review my code and tell me how to restructure it to improve its execution speed.

If my approach is wrong, please guide me toward the correct approach, along with code I can apply to the routine above, so it takes minimal time even when iterating over more than 25,000 items. Looking for suggestions and better code to achieve the same task.
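The dominant cost above is the per-comment scan of QCViewAlltmp: 25,000 comments × m rows. Building a lookup keyed on (Section, LineItem, Broker) once turns that into O(n + m). The pattern is language-agnostic, so it is sketched here in Python with dicts standing in for the rows; in C# the same role would be played by a Dictionary keyed on the three uppercased strings, built once before the foreach. All names below are illustrative:

```python
def build_index(rows):
    # One pass over the "view" rows: key on (section, lineitem, broker)
    index = {}
    for row in rows:
        section, li = row['GroupKey'].split('~')[:2]
        key = (section.upper(), li.upper(), row['Section'].upper())
        index.setdefault(key, row)  # keep the first match, like FirstOrDefault
    return index

def update_comments(comments, rows):
    index = build_index(rows)
    updated = 0
    for comment in comments:
        key = (comment['SectionFor'].upper(), comment['LiFor'].upper(),
               comment['BrokerFor'].upper())
        row = index.get(key)  # O(1) lookup instead of a full scan
        if row is not None:
            value = row.get(comment['PeriodCollection'], '')
            if value.strip():
                comment['IgnoreData'] = value
                updated += 1
    return updated
```

With the index in place, AsParallel() is no longer needed; the lookup itself is cheaper than the overhead of parallelizing a scan.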

javascript – Iterate over an HTML table to highlight differences where cells have multiple comparison elements

I want to highlight the differences between the first row of a table and all the other rows, column by column.

I have figured out how to achieve this when each cell of the table holds only one element to compare, but I would like to extend it to multiple comparison elements per cell, separated by commas.

Here is a fiddle for the single-element-per-cell case (https://jsfiddle.net/t19Lqbkn/), using the following code:

var table = document.getElementById("mytab1");
for (var i = 1, row; row = table.rows[i]; i++) {
    var matc = table.rows[1];
    for (var j = 0, col; col = row.cells[j]; j++) {
        if (col.innerHTML !== matc.cells[j].innerHTML) {
            // (the wrapping markup was stripped in the original; a red span is assumed)
            col.innerHTML = "<span style='color:red'>" + col.innerHTML + "</span>";
        }
    }
}

And here is a table with several elements per cell.
https://jsfiddle.net/6c7s9mky/

As you can see in the second link, in the first column of the second row only the element "Eva" should be red, and in the last row of the first column there should be no red text.
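The core of the multi-element extension is splitting each cell on commas and flagging which items are absent from the reference cell. That comparison logic is sketched in Python for clarity; the set-membership rule below is an assumption that matches the "only Eva should be red" example, and the JS version would wrap each flagged item in red markup instead of returning flags:

```python
def diff_items(reference_cell, cell):
    # Items present in the first row's cell, normalized
    ref_items = {s.strip() for s in reference_cell.split(',')}
    # Flag each item of the compared cell that the reference lacks
    return [(item, item not in ref_items)
            for item in (s.strip() for s in cell.split(','))]
```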

java – Retrieve sum data from the database or iterate at run time

I have an object (ShoppingCart) that has a list of CartItems, each containing the related Product and the quantity purchased.

public class ShoppingCart extends BaseEntity {

    @OneToMany(mappedBy = "shoppingCart", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<CartItem> cartItems = new ArrayList<>();

    @DateTimeFormat(pattern = "dd/MM/yyyy hh:MM:ss")
    private LocalDateTime dateTime;

    @Enumerated
    private PaymentMethod paymentMethod = PaymentMethod.CASH;

public class CartItem extends BaseEntity {

    @ManyToOne
    private Product product;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn
    private ShoppingCart shoppingCart;

public class Product extends BaseEntity {

    @NotEmpty
    private String name;

    private double price;

    private int quantity;

I have a view that summarizes all the shopping carts, and I do not know whether I should store that total inside the ShoppingCart entity or iterate to compute the sum.
Here is the code; right now I am iterating.

@GetMapping("/")
public String getProduct(@ModelAttribute("reportDto") ReportDto reportDto, Model model) {

    double total = 0, cash = 0, credit = 0, debit = 0;
    int quantity;
    List<ShoppingCart> results;
    if (reportDto.getBeginDate() == null || reportDto.getEndDate() == null) {
        results = shoppingCartService.findAll();
    } else {
        results = shoppingCartService.findByDateTimeBetween(reportDto.getBeginDate(), reportDto.getEndDate());
    }
    total = results.stream()
            .mapToDouble(shoppingCart -> shoppingCart.getCartItems().stream()
                    .mapToDouble(cartItem -> cartItem.getProduct().getPrice() * cartItem.getQuantity()).sum())
            .sum();
    quantity = results.stream().mapToInt(
            shoppingCart -> shoppingCart.getCartItems().stream().mapToInt(cartItem -> cartItem.getQuantity()).sum())
            .sum();

    cash = results.stream().filter(shoppingCart -> shoppingCart.getPaymentMethod().isCash())
            .mapToDouble(shoppingCart -> shoppingCart.getCartItems().stream()
                    .mapToDouble(cartItem -> cartItem.getProduct().getPrice() * cartItem.getQuantity()).sum())
            .sum();

    credit = results.stream().filter(shoppingCart -> shoppingCart.getPaymentMethod().isCredit())
            .mapToDouble(shoppingCart -> shoppingCart.getCartItems().stream()
                    .mapToDouble(cartItem -> cartItem.getProduct().getPrice() * cartItem.getQuantity()).sum())
            .sum();

    debit = results.stream().filter(shoppingCart -> shoppingCart.getPaymentMethod().isDebit())
            .mapToDouble(shoppingCart -> shoppingCart.getCartItems().stream()
                    .mapToDouble(cartItem -> cartItem.getProduct().getPrice() * cartItem.getQuantity()).sum())
            .sum();
    model.addAttribute("quantity", quantity);
    model.addAttribute("cash", cash);
    model.addAttribute("credit", credit);
    model.addAttribute("debit", debit);
    model.addAttribute("total", total);
    return "reports/reports";
}
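As a side note on the code itself: the four stream pipelines traverse `results` four times, and each can be collapsed into a single pass. The idea is sketched in Python with dicts standing in for the entities (in Java this maps to one loop, or to `Collectors.groupingBy` on the payment method); names are illustrative:

```python
from collections import defaultdict

def summarize(carts):
    # One traversal: per-payment-method totals, overall total, and quantity
    totals = defaultdict(float)
    quantity = 0
    for cart in carts:
        cart_total = sum(item['price'] * item['quantity'] for item in cart['items'])
        totals[cart['payment_method']] += cart_total
        quantity += sum(item['quantity'] for item in cart['items'])
    grand_total = sum(totals.values())
    totals['total'] = grand_total
    return dict(totals), quantity
```

Whether to also persist the total on the entity is a separate trade-off: storing it denormalizes the data and must be kept in sync, while computing it per request stays consistent but costs a traversal (or a database-side SUM).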

Also, any comments regarding the code will be appreciated!

Thanks in advance.

swift – Iterate and modify Struct in a Set

I have a Set of Ints:

var mySet = Set<Int>()

mySet.insert(1)
mySet.insert(2)
// ...

// How to iterate over it and, for example, add 1 to each element?
for (index, _) in mySet.enumerated() {
    // How to subscript? A Set is unordered!
}

I want to modify mySet and do some additional calculations on it. How can I change each element in mySet? I can create another temporary Set, insert modified copies into it, and afterwards assign the temporary Set back to the original mySet, but I was wondering whether there is a better way.

// Ugly method?
var mySetTemp = Set<Int>()

for value in mySet {
    mySetTemp.insert(value + 1)
}

mySet = mySetTemp

Is there any way to do the above without creating additional objects, just modifying the original values during an iteration of a Set of structs?
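Because set elements are hash-based, most languages favor rebuilding the set from a transform over in-place mutation. In Python, for instance, the "temporary set" pattern collapses to a one-line comprehension (the analogous Swift move would be constructing a new Set from a map over the old one):

```python
my_set = {1, 2, 3}
# Rebuild the set with every element incremented, in one expression
my_set = {value + 1 for value in my_set}
```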

Set theory – What is the limit of iterating class comprehension, reflection, and limitation of size?

In an earlier posting about adding a reflection principle together with an axiom of limitation of size to Ackermann's set theory, the answer was that the theory rises to a Mahlo cardinal.

I'm wondering here whether this method can be iterated, and what is the most that can be achieved through this iteration process.

For example, let's define a theory $\mathsf{K}^{+}(V_{\lambda})$ in the language of $FOL(=, \in, V_1, V_2, \dots, V_{\lambda})$, where $\lambda$ is a specific recursive ordinal that has some specific ordinal notation, that is, where $\lambda < \omega_1^{CK}$.

Now the idea is that each theory $\mathsf{K}^{+}(V_{\lambda})$ has the axiom of extensionality, a class comprehension axiom scheme for $V_{\alpha}$, an axiom of reflection for $V_{\alpha}$, and an axiom of limitation of size for $V_{\alpha}$, for each $\alpha < \lambda$. We also have the axiom scheme:

If $\alpha < \beta$, then: $\forall x\, (x \subseteq V_{\alpha} \to x \in V_{\beta})$
is an axiom.

More specifically, the class comprehension formula for $V_{\alpha}$ is:

$$\forall x_1, \dots, x_n \subseteq V_{\alpha}\ \exists x \forall y\, (y \in x \leftrightarrow y \in V_{\alpha} \wedge \varphi(y, x_1, \dots, x_n))$$

where $\varphi(y, x_1, \dots, x_n)$ is a formula that does not use the primitives $V_{\beta}$ for $\beta > \alpha$.

While the reflection scheme formula for $V_{\alpha}$ would be written as:

$$\forall x_1, \dots, x_n \in V_{\alpha}\ [\exists y\, (\varphi(y, x_1, \dots, x_n)) \to \exists y \in V_{\alpha}\ (\varphi(y, x_1, \dots, x_n))]$$

where $\varphi(y, x_1, \dots, x_n)$ does not use any primitive symbol $V_{\beta}$ with $\beta \geq \alpha$.

Now, what is the limit of the strength of the $\mathsf{K}^{+}(V_{\lambda})$ theories?

Performance – Iterate between dates and INSERT values in a performant way

I have created a query that fills a datapoint table with random values.
The logic is simple: iterate between the START and END dates and INSERT random values.

I want this query to be very performant, for example able to fill every second of a year with values (which with this code would take ages). I am new to SQL statements of this complexity and do not know their pitfalls.

Are there any hidden spots in my code that can be improved? If I replace the random function with a hard-coded value, will that give a big speed boost?
Is a loop with many INSERT INTO statements a waste of time; is there a better way to insert (some kind of batch insert)?

DO $$
DECLARE -- Variables
    NODE_ID bigint := 11; -- The node ID of the data point.
    TIMESTAMP_START TIMESTAMP := '2018-12-06 22:00:00';
    TIMESTAMP_END TIMESTAMP := '2018-12-10 00:00:00';
    TS_STEP INTERVAL := '30 minutes';

    MAX_VALUE integer := 100;

BEGIN
    LOOP
        EXIT WHEN TIMESTAMP_START > TIMESTAMP_END;

        INSERT INTO datapoint_values (dp_id, ts, datatype, source, int_value, float_value)
        VALUES (NODE_ID, TIMESTAMP_START, 2, 0, floor(random() * (MAX_VALUE + 1)), 0);

        TIMESTAMP_START := TIMESTAMP_START + TS_STEP;
    END LOOP;
END $$;
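On the batch question: PostgreSQL can produce the whole series of timestamps set-based with `generate_series`, replacing the loop with a single INSERT ... SELECT. A sketch, assuming the same table, columns, and constants as above:

```sql
INSERT INTO datapoint_values (dp_id, ts, datatype, source, int_value, float_value)
SELECT 11, ts, 2, 0, floor(random() * 101), 0
FROM generate_series('2018-12-06 22:00:00'::timestamp,
                     '2018-12-10 00:00:00'::timestamp,
                     interval '30 minutes') AS ts;
```

One statement means one round of planning and one transaction, which is typically far faster than per-row INSERTs in a PL/pgSQL loop.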

Why can't I iterate?

I ran into a problem when calculating the following equations:

c[1, 0] := q (cc - a);
c[0, 1] := p (d - b);
c[0, 0] := 1;
c[i_, 0] := -c[i - 1, 0] ((i - 1 + q) p a - p q cc)/((i + q) p - p q);
c[0, j_] := -c[0, j - 1] ((q p b - (j - 1 + p) q d)/((q p - (j + p) q)));
If[h == 0, c[i_, j_] = 0, c[i_, j_] := -(c[i - 1, j] ((i - 1 + q) p a - (j + p) q cc) + c[i, j - 1] ((i + q) p b - (j - 1 + p) q d))/((i + q) p - (j + p) q)];

Then, when I evaluate $c[1,1]$, it just returns $c[1,1]$ unevaluated.


I do not know what is happening. I hope someone can help. Thank you!
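A likely culprit: `If[h == 0, ...]` is evaluated once, at definition time, and if `h` has no value the `If` cannot decide, so neither branch ever defines `c[i_, j_]` and `c[1, 1]` stays unevaluated. For cross-checking numeric values of the recurrence itself, here is the same recurrence sketched in Python with memoization (the `h == 0` special case omitted; parameter names follow the question):

```python
from functools import lru_cache

def make_c(p, q, a, b, cc, d):
    """Memoized version of the question's recurrence (h == 0 case omitted)."""
    @lru_cache(maxsize=None)
    def c(i, j):
        if (i, j) == (0, 0):
            return 1
        if (i, j) == (1, 0):
            return q * (cc - a)
        if (i, j) == (0, 1):
            return p * (d - b)
        if j == 0:  # first-column recurrence
            return -c(i - 1, 0) * ((i - 1 + q) * p * a - p * q * cc) / ((i + q) * p - p * q)
        if i == 0:  # first-row recurrence
            return -c(0, j - 1) * (q * p * b - (j - 1 + p) * q * d) / (q * p - (j + p) * q)
        # general two-index recurrence
        return -(c(i - 1, j) * ((i - 1 + q) * p * a - (j + p) * q * cc)
                 + c(i, j - 1) * ((i + q) * p * b - (j - 1 + p) * q * d)) / ((i + q) * p - (j + p) * q)
    return c
```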