apps – android – Check the row's checkbox automatically in multiple-choice modal mode

I have a ListView and have added a MultiChoiceModeListener that lets me check and uncheck the checkboxes. The problem is that I want the checkbox of the row on which the listener was triggered to be checked by default. For example, if I long-press the fifth row, the checkbox in the fifth row should be checked automatically.

Here is my code for the cursor adapter class:

public void bindView(View view, Context context, Cursor cursor) {
    // Find the fields to populate in the inflated template
    TextView nameView = (TextView) view.findViewById(R.id.name);
    TextView summaryView = (TextView) view.findViewById(R.id.summary);

    // Extract the properties from the cursor
    int nameColumnIndex = cursor.getColumnIndex(SongContract.SongEntry.COLUMN_SONG_NAME);
    int linkColumnIndex = cursor.getColumnIndex(SongContract.SongEntry.COLUMN_SONG_LINK);

    String songName = cursor.getString(nameColumnIndex);
    String songLink = cursor.getString(linkColumnIndex);

    // Populate the fields with the extracted properties
    nameView.setText(songName);
    summaryView.setText(songLink);

    CheckBox checkBox = (CheckBox) view.findViewById(R.id.checkbox1);
    LinearLayout rowLayout = (LinearLayout) view.findViewById(R.id.listLayout);
    checkBox.setTag(cursor.getPosition());

    if (DownloadedFragment.isLongPressed) {
        checkBox.setVisibility(View.VISIBLE);

        // Check the checkbox of the row that started the selection mode
        if (DownloadedFragment.pressed) {
            checkBox.setChecked(true);
            DownloadedFragment.pressed = false;
        }

        // Toggle the checkbox when the row itself is tapped
        rowLayout.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                if (checkBox.getVisibility() == View.VISIBLE) {
                    checkBox.setChecked(!checkBox.isChecked());
                }
            }
        });
    } else {
        checkBox.setVisibility(View.GONE);
    }

    // Restore the checked state of rows whose position was stored in the list
    checkBox.setChecked(list.contains(cursor.getPosition()));
}

After creating the first list with choice fields, I can't search those fields: how do I maintain data integrity and still keep the links?

postgresql 11: adding a case statement to the where clause leads to a bad plan choice

We have a lot of procedures that use case statements in their where clauses. Essentially, we want to run the same procedures at different times, and we set "flags" in the database to change the behavior of the procedures depending on when they run, so that we reprocess the least amount of data needed when we execute them. The case statements take those flags into account.

I realize that is clear as mud, so let me give an example:
At times, we may want to rebuild the entire contents of a table, so we could run something like

insert into table_a select * from table_b

At other times, we just want to rebuild rows that match certain conditions:

insert into table_a select * from table_b where (business logic)

Queries can get quite complex, and we do this in many different procedures, so it would be bad practice to rewrite all queries for each scenario. So instead we do this:

update flag_table set flag = 't';

then at the top of the procedure we declare a variable:

declare _flag boolean;

. . .

_flag = (select a.flag from flag_table a);

. . .

insert into table_a select * from table_b where case when _flag then (business logic) end

Obviously, this is a rough approximation of our practices; I also say "procedures", but to be precise they are Postgres user-defined functions.
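
Put together, the pattern looks roughly like the function below. This is only a minimal sketch: rebuild_table_a is a made-up name, the tables are the placeholder table_a / table_b / flag_table from the fragments above, and the real business logic is omitted.

create or replace function rebuild_table_a()
returns void
language plpgsql
as $$
declare
    _flag boolean;
begin
    -- read the flag that was set beforehand (update flag_table set flag = 't';)
    _flag := (select a.flag from flag_table a);

    -- the CASE pulls the flag into the WHERE clause, as in the fragments above
    insert into table_a
    select *
    from table_b
    where case when _flag then (true /* business logic */) end;
end;
$$;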

The following code recreates the problem that inspired this thread:

drop table if exists t_test; create temp table t_test as
    select  md5(random()::text) as test_text,
            (current_date - (random() * interval '5 years'))::date as date
    from generate_series (1,1000000);

drop table if exists t_sub; create temp table t_sub as
    select * from t_test order by random() limit 100000;

--efficient:
drop table if exists t_result; create temp table t_result as
    select  a.*
    from t_test a
    where   exists
        (
            select 1 from t_sub b where a.test_text = b.test_text and a.date between b.date - interval '6 months' and b.date + interval '6 months'
        );

--inefficient:
drop table if exists t_result; create temp table t_result as
    select  a.*
    from t_test a
    where   case when 1=1 then 
               exists
               (
                   select 1 from t_sub b where a.test_text = b.test_text and a.date between b.date - interval '6 months' and b.date + interval '6 months'
               )
            end;

Here is the query plan for the query labeled efficient:

Hash Semi Join  (cost=7453.88..54133.48 rows=58801 width=36)
  Hash Cond: (a.test_text = b.test_text)
  Join Filter: ((a.date >= (b.date - '6 mons'::interval)) AND (a.date <= (b.date + '6 mons'::interval)))
  ->  Seq Scan on t_test a  (cost=0.00..40086.54 rows=1058418 width=36)
  ->  Hash  (cost=4011.54..4011.54 rows=105918 width=36)
        ->  Seq Scan on t_sub b  (cost=0.00..4011.54 rows=105918 width=36)

Here is the query plan for the query that I have tagged as inefficient:

Seq Scan on t_test a  (cost=0.00..95755427.48 rows=529209 width=36)
  Filter: (SubPlan 1)
  SubPlan 1
    ->  Seq Scan on t_sub b  (cost=0.00..5335.51 rows=59 width=0)
          Filter: ((a.test_text = test_text) AND (a.date >= (date - '6 mons'::interval)) AND (a.date <= (date + '6 mons'::interval)))

The inefficient plan takes forever to execute (I gave up after 30 seconds, anyway). In our actual case we're not running this on millions of rows, just tens of thousands, but due to what I'll call server issues beyond our control, even in our smallest case the second query took several minutes to complete. So of course my question is: why is a different plan chosen for the second query, and is there anything we can do about it?

Please yell at me if I have left out something essential; this is only my second question here. Thank you.
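
For comparison, here is a sketch (using the repro tables above) of the alternative where the flag is tested in plpgsql control flow instead of inside the WHERE clause; each branch then hands the planner a plain statement it can flatten into the hash semi-join shown earlier, at the cost of duplicating the query body, which is exactly what the CASE pattern was meant to avoid:

do $$
declare
    _flag boolean := true;  -- would come from flag_table in the real procedures
begin
    if _flag then
        -- plain EXISTS: the planner can turn this into the hash semi-join plan
        drop table if exists t_result;
        create temp table t_result as
            select a.*
            from t_test a
            where exists
                (
                    select 1 from t_sub b
                    where a.test_text = b.test_text
                      and a.date between b.date - interval '6 months' and b.date + interval '6 months'
                );
    else
        -- full rebuild, no filter at all
        drop table if exists t_result;
        create temp table t_result as
            select * from t_test;
    end if;
end $$;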

mysql – choice of database

I have PDF files containing tables that I converted to Excel; some of them share the same structure and others do not.

My problem is searching all of these files for a keyword entered by the user, while the files are unrelated to each other (i.e. no foreign keys exist between them).

My big concern is whether it is possible to search across all the tables; the choice of database is also a little confusing for me, as is how to handle these files in a single database (Access, MySQL, MongoDB, HFSQL from WinDev, …).

So I am undecided about how to solve my problem:

which database would be best suited to it, and whether my problem has a solution at all.

PS: I have searched the entire web and found nothing for my problem.
Please help me, I'm really struggling. Thanks in advance.

enterprise sharepoint: how to show hidden columns when 2 or more checkboxes are checked in a choice field

Struggling with this for a while:
I think I need an OR statement somewhere
I am trying to hide the rows unless the Male or Female box is checked

$(function () {

    // Hide the dependent rows initially
    $("nobr:contains('If Male')").closest('tr').hide();
    $("nobr:contains('If Female')").closest('tr').hide();
    $("nobr:contains('Address')").closest('tr').hide();

    // When either the Male or the Female checkbox changes, show the rows
    // if at least one of the two is checked, otherwise hide them again
    $("span[title='Male'] > input, span[title='Female'] > input").change(function () {
        if ($("span[title='Male'] > input").is(":checked") ||
            $("span[title='Female'] > input").is(":checked")) {
            $("nobr:contains('If Male')").closest('tr').show();
            $("nobr:contains('If Female')").closest('tr').show();
            $("nobr:contains('Address')").closest('tr').show();
        } else {
            $("nobr:contains('If Male')").closest('tr').hide();
            $("nobr:contains('If Female')").closest('tr').hide();
            $("nobr:contains('Address')").closest('tr').hide();
        }
    });

});

2013 – SharePoint column limits. Choice column

Could you explain the limits of the SharePoint choice column?

I read several articles about the limitation of the choice column. For example here and here.

From the MS article:
Choice field: maximum value 255; column size: 30 bytes

Choice field (multiple selection): maximum value 350; column size: 22 bytes

My question is:

Why can I add 10 options, each 50 characters long, to the choice column and not run into any limitation?

And what does 30 bytes mean? My choice column contains 10 * 50 = 500 characters, which is 500 bytes.

Is the official or ShareGate information correct and up to date?

xslt: how to make the choice options available to the user in an XSL file

I have a SharePoint list in which one column is configured as a choice field. I pull that list onto a page and apply an XSL file that styles the appearance of the list items. However, I need the end user to actually see the choice column and be able to select an option (such as 1, 2 or 3). Then, when a button is clicked (such as a submit button), the user's choice should be written back to the actual list.
The purpose is for the user to indicate which training they wish to enroll in, from the first choice to the third. The other items in the list are already visible to the end user.
I hope this makes sense. Not sure if this is possible?
Thank you
CSM

design: choice of the right "open source" license for code

In terms of programming, what defines open source?

With regard to licenses, the definitions most frequently referred to are the Open Source Initiative (OSI) open source definition or the Free Software Foundation (FSF) free software definition.

The only restriction I want is that if someone, including a company, wants to make a profit by turning it into SaaS ("Software as a Service") deployed in a public cloud, they must first acquire a commercial license.

What you describe is not an open source license and would likely fail to meet the requirements of both the OSI definition and the FSF definition. Although you may have a commercial product that is also released under an open source license, an open source license cannot restrict the free distribution of the source code or of derivative works, and it cannot prevent commercial use.

There are licenses designed to make it harder for people to profit from open source without giving back. A good example is the GNU Affero General Public License (AGPL). This license requires that software accessed over a network (as in the case of a SaaS application) make its corresponding source code available to end users. So if someone took your software and built a SaaS application from it, they would have to release the code and any modifications to their users.

Dual licensing is also a way to handle this. For example, a business client may want to host the SaaS application without publishing any changes it makes. The software can be made available under two or more licenses: in exchange for payment, it can be delivered to a customer under a different open source license, or even a custom license, that does not require them to release the source code to their customers.

What combo will be better? Lens choice (Sony E Mount)

I have an a7 III and a Tamron 28-75 2.8. I shoot photos (70% of my work) and video (25%). Now I am planning to buy some premium lenses. Which would be the better decision: the Sony Zeiss 50mm 1.4 ZA, or the combo of the Sigma 35mm 1.4 Art prime (for Sony E) + 85mm 1.8 FE?
I am also planning to buy Tamron 17-28 mm in the future because sometimes I like to shoot wide.
For me, these 2 variants have the same price (Sony Zeiss 50 mm ZA used = Sigma 35 mm 1.4 used + new 85 mm FE 1.8).
What's your opinion about it? Which option will be better?
Thank you

query performance – PostgreSQL: filtering on an array column plus ORDER BY produces a bad plan / wrong index choice

This story is a bit of a nightmare… I have this table:

                                        Table "subscriptions"
   Column   |            Type             | Collation | Nullable |                  Default                  
------------+-----------------------------+-----------+----------+-------------------------------------------
 id         | bigint                      |           | not null | nextval('subscriptions_id_seq'::regclass)
 project_id | integer                     |           |          | 
 endpoint   | character varying           |           |          | 
 created_at | timestamp without time zone |           | not null | 
 uid        | character varying           |           |          | 
 tags       | character varying[]         |           |          | '{}'::character varying[]
 trashed_at | timestamp without time zone |           |          | 
 ...

Indexes:
    "subscriptions_pkey" PRIMARY KEY, btree (id)
    "index_subscriptions_on_endpoint" btree (endpoint)
    "index_subscriptions_on_project_id_and_created_at" btree (project_id, created_at DESC)
    "index_subscriptions_on_project_id_and_tags" gin (project_id, tags) WHERE trashed_at IS NULL
    "index_subscriptions_on_project_id_and_tags_using_btree" btree (project_id, tags) WHERE trashed_at IS NULL
    "index_subscriptions_on_project_id_and_trashed_at" btree (project_id, trashed_at DESC)
    "index_subscriptions_on_project_id_and_uid" btree (project_id, uid) WHERE trashed_at IS NULL

I have this query:

EXPLAIN (ANALYZE, BUFFERS) SELECT "subscriptions".* FROM "subscriptions" WHERE "subscriptions"."project_id" = 12345 AND "subscriptions"."trashed_at" IS NULL AND ((tags @> ARRAY['crt:2020_02']::varchar[])) ORDER BY "subscriptions"."created_at" DESC LIMIT 30 OFFSET 0;

Note: this is a SaaS application that builds the query against the database from user input (sent through a REST API), so I have no full control over the tags used for filtering (the conditions can also include AND, OR and NOT). Therefore, don't focus on this exact tag filter: just consider that there are some (variable) conditions on tags. Also note that the tags are not predefined: they are attached to the rows on the fly by the customer, and I have no control over them (they can be any string).

This is the result:

          QUERY PLAN                                                                                          
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.56..13577.18 rows=30 width=382) (actual time=1195740.956..1195740.956 rows=0 loops=1)
   Buffers: shared hit=5918464 read=3226736 dirtied=1537
   ->  Index Scan using index_subscriptions_on_project_id_and_created_at on subscriptions  (cost=0.56..5556910.68 rows=12279 width=382) (actual time=1195740.951..1195740.951 rows=0 loops=1)
         Index Cond: (project_id = 12345)
         Filter: ((trashed_at IS NULL) AND (tags @> '{crt:2020_02}'::character varying[]))
         Rows Removed by Filter: 9202438
         Buffers: shared hit=5918464 read=3226736 dirtied=1537
 Planning Time: 4.319 ms
 Execution Time: 1195741.912 ms
(9 rows)

As you can see, this simple query takes several minutes.

I have tried the following:

Force the use of a different index

I simply change the ORDER BY to this:

ORDER BY ("subscriptions"."created_at" + interval '0 days') DESC
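
Written out, the test is identical to the original EXPLAIN above; only the ORDER BY expression changes:

EXPLAIN (ANALYZE, BUFFERS)
SELECT "subscriptions".*
FROM "subscriptions"
WHERE "subscriptions"."project_id" = 12345
  AND "subscriptions"."trashed_at" IS NULL
  AND (tags @> ARRAY['crt:2020_02']::varchar[])
-- the added interval keeps the ORDER BY from matching the sort order of
-- index_subscriptions_on_project_id_and_created_at, so the planner stops favoring that index
ORDER BY ("subscriptions"."created_at" + interval '0 days') DESC
LIMIT 30 OFFSET 0;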

And time passes from several minutes to a few ms:

                                                                                QUERY PLAN                                                                                
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=63775.08..63775.16 rows=30 width=391) (actual time=566.728..566.728 rows=0 loops=1)
   Buffers: shared hit=5375 read=2492
   ->  Sort  (cost=63775.08..63856.39 rows=32522 width=391) (actual time=566.727..566.727 rows=0 loops=1)
         Sort Key: ((created_at + '00:00:00'::interval)) DESC
         Sort Method: quicksort  Memory: 25kB
         Buffers: shared hit=5375 read=2492
         ->  Bitmap Heap Scan on subscriptions  (cost=1200.28..62814.56 rows=32522 width=391) (actual time=566.709..566.709 rows=0 loops=1)
               Recheck Cond: ((project_id = 12345) AND (tags @> '{crt:2020_02}'::character varying[]) AND (trashed_at IS NULL))
               Buffers: shared hit=5372 read=2492
               ->  Bitmap Index Scan on index_subscriptions_on_project_id_and_tags  (cost=0.00..1192.15 rows=32522 width=0) (actual time=566.706..566.706 rows=0 loops=1)
                      Index Cond: ((project_id = 12345) AND (tags @> '{crt:2020_02}'::character varying[]))
                     Buffers: shared hit=5372 read=2492
 Planning Time: 2.511 ms
 Execution Time: 566.827 ms
(14 rows)

The problem is that this is a hack, and it produces terrible results, and even sequential scans, when thousands of rows match the condition.

However, it shows that PG could choose a better plan!

To get better plans, I tried this:

ALTER TABLE subscriptions ALTER project_id SET STATISTICS 10000;
ALTER TABLE subscriptions ALTER tags SET STATISTICS 10000;

CREATE STATISTICS stats_on_subscriptions ON project_id, tags FROM subscriptions;

VACUUM ANALYZE subscriptions;

Nothing changes.

I've tried (just one at a time) the following indexes:

CREATE INDEX CONCURRENTLY idx_subscriotion_fields_1 ON subscriptions (created_at DESC, project_id, tags);

CREATE INDEX CONCURRENTLY idx_subscriotion_fields_2 ON subscriptions (project_id, created_at DESC, tags);

CREATE INDEX CONCURRENTLY idx_subscriotion_fields_3 ON subscriptions (project_id, tags, created_at DESC);

However, PG ignores them completely: the planner continues to use index_subscriptions_on_project_id_and_created_at. I really have no idea why: I think idx_subscriotion_fields_1 would be a perfect choice for most queries (whether they return few rows or many).

I have even tried to temporarily disable index_subscriptions_on_project_id_and_created_at: Instead of using the other indexes, PG chooses a sequential scan.

I have tried more frequent autovacuum runs and setting random_page_cost = 1. Nothing changes.

I've also tried to find out whether I can use hints, but unfortunately PG does not allow them 🙁 which is frustrating. Otherwise, from my application, I could run a quick count (which is fast for the same conditions) and point PG in the right direction.

Filtering rows on their tags and taking the most recent ones does not seem like a complex requirement.

However, on this large table PG has many problems and I cannot find a solution. I am starting to think that PG is missing several features (good statistics on array values, index hints, btree indexes that combine ordering with array values, multi-tenancy support, etc.) and is not well suited to large data. I hope I am wrong, since we have built the entire SaaS on PG. Every post I find suggests the same things I have already tried; any help would be greatly appreciated! Thank you!